It happens every semester. A student triumphantly points out that Jean-Jacques Rousseau is undermining himself when he claims “the man who reflects is a depraved animal,” or that Ralph Waldo Emerson’s call for self-reliance is in effect a call for reliance on Emerson himself. Trying not to sound too weary, I ask the student to imagine that the authors had already considered these issues.
Instead of trying to find mistakes in the texts, I suggest we take the point of view that our authors created these apparent “contradictions” in order to get readers like us to ponder more interesting questions. How do we think about inequality and learning, for example, or how can we stand on our own feet while being open to inspiration from the world around us? Yes, there’s a certain satisfaction in being critical of our authors, but isn’t it more interesting to put ourselves in a frame of mind to find inspiration in them?
Being a student in the sciences, I don’t experience this kind of humanities phenomenon directly. But this ultra-critical mindset pervades everyday life, at least at an elite university. Students engage in this “intellectual one-upmanship” all the time without even realizing it. Try using Thomas Jefferson in a pro-freedom argument and you get the response that TJ owned slaves, thereby invalidating whatever moral or legal progress he allegedly made; therefore, the takeaway point is that the liberal notion of freedom was built on detestable foundations.
Also from Roth:
Liberal education in America has long been characterized by the intertwining of two traditions: of critical inquiry in pursuit of truth and exuberant performance in pursuit of excellence. In the last half-century, though, emphasis on inquiry has become dominant, and it has often been reduced to the ability to expose error and undermine belief. The inquirer has taken the guise of the sophisticated (often ironic) spectator, rather than the messy participant in continuing experiments or even the reverent beholder of great cultural achievements.
Even for my own blog posts, I sometimes run into critical comments which, instead of saying something substantive, completely miss the main point and belittle some small detail that I had usually already considered and addressed elsewhere in the article. One is powerless to defend against such criticisms, as preemptively placing ample amounts of caveats is no deterrent. It just changes the criticism from “The author does not consider X…” to “The author dismisses X…” followed by a pro-X argument, where X is a counterargument that the author has already considered.
Not that critical comments are bad—they’re quite useful. Constructive criticism is a hundred times more helpful than praise. Perhaps the issue is a self-fulfilling prophecy of blogging: since people don’t expect complex arguments with caveats, they assume that everything you say is absolute, even when that is clearly false. And it is not just in academia or blogging. Go to the comments page of any remotely controversial news story (I really enjoy reading CNN comments), and you can effortlessly predict which arguments and counterarguments are used.
“Critical” in this context means close or analytical, not disparaging or condemnatory. Thus, a critical reading of a text means a close or analytical reading of the text, not a disparaging or condemnatory reading. The “historical critical method” of interpreting the Christian Bible, for example, means a close or analytical reading of the text, not a disparaging or condemnatory reading. “Critical thinking” doesn’t mean “exposing error”, it means thinking analytically. I think they need a dictionary at Wesleyan. And I mean that in the critical sense.
And a response by “Austin Uhler”:
Your comment is an example of the type of thinking that the author is discouraging. While you are correct about the strict meaning of “critical” in this context, your uncharitable reading means you are missing the author’s point: it is becoming more common for students to take critical thinking down negative, dismissive and unproductive paths.
This is probably the best comment-response pair I have ever seen for a NYT article.
Is the hypercritical condition a legacy of postmodernism? Is it simply a byproduct of the Internet? Are we becoming more cynical? I don’t know.
Being hypercritical is certainly a better problem to have than being uncritical. I appreciated Roth’s article nonetheless, for addressing the overly critical crowd.
Everyone is sitting down in a crowded theater, comfortably seated and with a good view. All is well until one person decides his view is not good enough, so he stands up to get a clearer view. This ruins other people's views, so they stand up as well. A while later, everyone is standing up but has the same view as before, resulting in each being in a position strictly worse than when everyone was sitting.
This particular example is typically avoided since the social norm in a theater is to sit. In fact, in numerous examples of this game, there are either direct (laws) or indirect (social norms) methods of control to prevent such disasters from happening. Here are two for illustration:
Crime. If one person stole a little, this person would be in a better position and society would not be harmed by much. However, if everyone did this, society would collapse. The criminalization of theft prevents this problem (for the most part). This concept applies to many types of crimes.
Environmentalism. If one person polluted more, there would be virtually no change to the environment. However, if everyone did so, the environment would feel the full effects. (This still isn’t quite resolved, but in most developed countries it is well on its way.)
From a game-theoretic perspective, however, each individual taking the selfish path is making a rational decision. The problem is that the system may not discourage the selfish activity sufficiently.
Someone who doesn’t recycle may (justifiably) argue that they do in fact care about the environment, but that the impact of their not recycling is negligible. While this is true, if everyone thought like this, then we would all be standing up in the theater. The main point of this post is to go over some less commonly cited situations.
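To make the game-theoretic structure concrete, here is a toy payoff model of the theater game. The specific numbers are made up purely for illustration; the point is the shape of the incentives, not the values.

```python
# A toy payoff model of the theater game, with made-up numbers for
# illustration: standing strictly dominates sitting for an individual,
# yet the all-standing outcome is worse than the all-sitting one.

def payoff(my_action, others_action):
    """View quality (higher is better) for one person.

    Standing adds a small edge over whatever the crowd does,
    but a standing crowd blocks everyone's view.
    """
    base = 10 if others_action == "sit" else 5   # a standing crowd hurts the view
    edge = 2 if my_action == "stand" else 0      # standing up helps *you* a bit
    return base + edge

# Whatever the crowd does, standing is individually better...
assert payoff("stand", "sit") > payoff("sit", "sit")
assert payoff("stand", "stand") > payoff("sit", "stand")

# ...yet everyone standing is strictly worse than everyone sitting.
assert payoff("stand", "stand") < payoff("sit", "sit")
```

This is exactly why each individual's "rational" choice leads the whole theater to a worse outcome, unless a norm or rule keeps everyone seated.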
Studying for Tests
I would argue that studying for a test falls into the category of standing up in a theater. From both high school and college, I have observed or have heard of people studying hours upon hours for tests and often barely remembering any of the material after a semester. A test should measure how well you understand something, not how well you can memorize and cram facts into your brain for the next day.
People who know me from high school and college know I don’t study much (if at all, depending on the class) for tests. Perhaps some see this as a sign of not caring, but I would argue that I care about the knowledge just as much as, if not more than, people who study far longer hours. In the cases where I do study, I go for the “why” rather than the “what,” and I study to load the concepts into long-term memory, rather than the details into short-term memory. If you do need the details at a later time, cram them in then, when they are relevant and when you have the big-picture understanding.
Let’s pretend that studying for tests were not allowed. Then what would a test measure? Would it measure how much attention someone paid in lecture? How well they comprehended the main points? What part of the homework they didn’t copy from someone else?
In fact, everyone’s grades would still be similar. In classes where grades are curved, if everyone does “worse” on a test the same way, then the grades will be unaffected (though there may be some shifting around). The tests would just become more genuine.
So it may seem like I have something against studying for tests. But what part of studying for tests, specifically, do I have an issue with? Well, as mentioned before, if everyone studies for tests, the scores become more a measure of who studied the most and who could cram material most efficiently, instead of who actually understood the content. But even if this problem were somehow irrelevant (let’s say an irrefutable study comes out tomorrow showing that cramming ability is just as relevant for the real world as understanding), I would still have an issue with studying, namely the time spent. Suppose someone is taking 4 classes and studies 4 hours for each midterm and 8 hours for each final. That’s 4 × (4 + 8) = 48 hours spent studying in a semester. Multiply that by 8 semesters to get 384 hours, or 16 full days, spent on studying. These 16 days are the difference between sitting down and standing up.
Preparing for Colleges/Job Interviews
Sure, the informative power of some of the tests I’ve mentioned above may arguably be above zero. For example, maybe it’s reasonable for a dedicated premed student to cram before a bio test because the details do matter, though the question remains whether such a student will remember anything years later. But there’s still one very important test taken all around the country that really has no arguable intellectual merit: the SAT.
This test is probably the biggest insult to intelligence when taken seriously. I try my hardest to resist cringing whenever I hear smart people talking about their SAT scores. From the CollegeBoard site:
The SAT and SAT Subject Tests are a suite of tools designed to assess your academic readiness for college. These exams provide a path to opportunities, financial support and scholarships, in a way that’s fair to all students. The SAT and SAT Subject Tests keep pace with what colleges are looking for today, measuring the skills required for success in the 21st century.
And I’m sure that by “keep[ing] pace with what colleges are looking for today, measuring the skills required for success in the 21st century,” what CollegeBoard means is that the skills required for success in today’s world are… wealth, certain racial backgrounds, and access to prep courses.
Anyways, I guess my point is that if nobody studied for the SAT, nobody took prep courses, and no one cared so much, then:
Students wouldn’t be wasting their time studying for it.
Many families would save time and money on SAT prep by not having to do it.
As a result, less privileged students would stand a better chance, and thus the test would be more fair.
Of course, while this may sound good, it is easier said than done. To not study would be shooting yourself in the foot, or in this case, sitting down in a theater in which everyone else is standing. It would be like one country reducing its greenhouse emissions while other countries keep theirs unchanged.
(Personally, I refused to study for the SAT, though at the time I had to give off the impression that I was studying for it to appease my Asian parents. If you really want the story, it’s in the latter part of this post.)
I would go further and say that preparing for job interviews in some ways fits this type of game. On this subject, however, I have very little experience, as my only important interviews were of a type that is very difficult to prepare for: math puzzles. Answering such questions did not hinge on knowing certain advanced equations, but on using simple tools that almost everyone knows, in unusual ways.
In addition, I understand that an interview not only judges the answers to the questions, but also the interviewee’s character. If it is evident that someone prepared a lot for an interview, that fact in itself would be considered in the interviewer’s assessment. However, I think that in a world in which no one prepared for interviews, both sides would benefit as the interviewee would save time and stress while the interviewer gets a more genuine view of the interviewee, not a carefully constructed outer shell.
And as a preemptive defense against the claim that studying or preparing is simply a result of competition: I have nothing against capitalism or competition. If anything, freeing up students’ time from studying for tests would let them compete in other areas, taking additional classes or learning new skills (I picked up programming while pretending to study for the SAT). I see the time wasted as an inefficiency. The point of not studying is to have more time, and hence be more productive.
Sitting down in a standing theater is a difficult decision. But if everyone sat down, we might all live in a better place.
Although the number of college graduates increased about 29% between 2001 and 2009, the number graduating with engineering degrees only increased 19%, according to the most recent statistics from the U.S. Dept. of Education. The number with computer and information-sciences degrees decreased 14%.
After coming up with the topic for the post, I found this article from 2011 with a similar title and citing the same WSJ story. It argued that the high school teaching environment was not adequate in preparing students for rigorous classes in college.
In addition, the article includes the argument that in the math and sciences, answers are plain right or wrong, unlike in the humanities and social sciences.
I can agree with these two points, but I want to add a few more from the perspective of 2013. Also, I am going to narrow the STEM group down a bit, to just math and science. The main reason is that in the past few years, the number of CS majors has actually increased rapidly. At Cornell, engineering classes can be massive, and there does not seem to be a shortage of engineers. Walk into a non-introductory, non-engineering-oriented math class, however, and you can often count the students on your fingers. So even though STEM as a whole is in a non-optimal situation, engineering and technology (especially computer science) seem to be doing fine. So the question remains.
Why Is America Leaving Math and Science Behind?
I mean this especially with regards to theoretical aspects of math and science, including academia and research.
In this situation, money is probably a big factor. The salary of a post-grad scientist (from one article, $37,000 to $45,000) is pitiful compared to that in industry (a median early-career salary of up to $95,000, depending on the subject, according to the same article). Essentially, there is a lack of a tangible goal.
There are other factors besides money. Modern math and science can be quite intimidating. All major results that could be “easily” discovered have already been discovered. In modern theoretical physics, for instance, the only questions that remain are in the very large or the very small—there is little left to discover of “tabletop” physics, the physics that operates at our scale. Most remaining tasks are not problems in physics, but puzzles in engineering.
Modern mathematics is very similar. While there are many open questions in many fields, the important ones are highly abstract. Even stating a problem takes a tremendous amount of explanation. That is, it takes a long time to convey to someone what exactly it is you are trying to figure out. The math and science taught in high school is tremendously unhelpful in preparing someone to actually figure out new math and science, and it is thus difficult for an entering college student to adjust their views of what math/science are.
Even the reasons for going to college have changed. More than ever, students list their top reason for going to college as better job prospects rather than personal or intellectual growth.
In addition, society seems more focused than before on immediate gain rather than long-term investment. Academia’s contribution to society, especially in math and science, is often not felt until decades or even centuries after something was invented. Einstein’s theories of relativity had no practical application when he formulated them, but our gadgets now rely on relativity all the time. Classical Greece knew about prime numbers, but prime numbers were not practically useful until the modern age required data encryption. Even a prolific academic might receive very little recognition in their own lifetime.
However, with the rise of online social networks in the last several years, you can now see what your friends are up to and what they are accomplishing in real-time. This should at least have some psychological effect on pushing people towards a career where real, meaningful progress can be tracked in real-time. Doing something that will only possibly have an impact decades later seems to be the same as doing nothing.
Considering the sentiment of the last few paragraphs, it might sound like I am talking about the decline in humanities and liberal arts majors. Indeed, while the number of math and science majors is increasing (though not as much as in engineering/technology), it almost seems like the theoretical sides of math and science are closer in spirit to the humanities and liberal arts than they are to STEM. The point is not for immediate application of knowledge, but to make contributions to the overall human pool of knowledge, to make this knowledge available to future generations.
In all, the decline of interest in theoretical math/science is closely correlated with the decline of interest in the humanities/liberal arts. Our culture is fundamentally changing to one that values practicality far more than discovery. (For instance, when is NASA going to land a human on Mars? 2037. JFK might have had a different opinion.) Overall this is a good change, mainly in the sense of readjusting the educational demographics of the workforce to keep America relevant in the global economy. But we should still place some value on theory and discovery.
Every time papers, projects, and prelims come around, the campus stress level rises dramatically. Sleep is lost (or outright skipped), meals are avoided, and all activities other than studying are brushed off. This happens not once a semester but throughout, corresponding to large assignments for every class.
And every time this happens, it seems that many students are focused not on actually learning the content, but on scoring higher grades than others. Of course, this phenomenon occurs in certain majors (engineering) much more than others. And I would guess that it happens at Cornell at an above average rate compared to that of the typical university. But it raises some questions that I want to explore.
Just a couple of notes. First, this article will mainly focus on the math/science/engineering side. And second, I do not think I need to mention why Cornell should be concerned about student stress.
Should Competition Be GPA-Focused?
Competition to a certain degree is beneficial, and I think no one would argue with that. As a math major I know very well that competition leads to efficiency. But there is a line where the marginal benefit in efficiency is not worth the huge increase in stress levels, and in this respect I think Cornell has crossed the line for good.
In addition to the GPA competition, there is the additional factor that students are competing for jobs, internships, and research positions. I think the competition here is mostly fine (except regarding salaries vs societal contribution; this topic deserves its own post). Combined with academic competition, however, this induces a vast amount of anxiety and stress in the students.
Without mentioning names, I will list some of the extreme behaviors I have observed of people I know:
A student pulled multiple all-nighters in a row to finish a project. While this might be plausible in real life for a rare occasion such as a doctoral thesis or a billion dollar merger, the student did this regularly for his classes.
A student has problem sets due on four out of five of the weekdays, and spends literally all his time outside of class eating, sleeping, or doing problem sets. In one particular class, the problem sets he hands in are 10-20 pages per week.
A student took 50+ credits in one semester, though he claims to know of someone who took 61.
A student brought a sleeping bag, refrigerator, and energy drinks to one of the school computer labs, and pretty much lives there, returning to his living place once every few days to shower.
Interestingly enough, I think these particular students will do fine, as they seem to know their own abilities and limits, and more importantly, they are all aware of what they are doing. They are also all very smart people who can actually learn the material. This “top tier” of students is not really adversely affected by competition, since they are smart enough to excel regardless of whether competition exists. Moreover, these students don’t seem to be grade-focused: they learn the material, and the grade comes as a byproduct of learning.
The group I am actually concerned about is the second tier. (Note: I just realized after typing this how judgmental that statement sounds, but hey, by statistics, unless you define the first tier to include everyone, there must be a second tier.) This group I would define as the smart people who don’t seem to understand how the first tier operates. They see the students in the first tier getting high grades and know those students are smart, so they think that if they prepare well for tests and get high grades, they will become just as smart.
What they don’t realize is the difference in cause and effect. The first tier prioritizes understanding first and the grades come as a byproduct, whereas the second tier prioritizes grades first and hope to gain some understanding as a byproduct.
Again, just as a disclaimer, these are just arbitrary definitions for first and second tier I made up for this particular observation. I am not saying that this criterion is the final say, and of course, there are numerous other factors regarding how well one does in college.
But from my experience, it is precisely the students in this second tier who are stressed. They are the ones trying so desperately to beat the test instead of to learn the material. And they are the ones who make college seem so competitive, as you can always hear them talking about tests and what their friends got on the tests and how they are being graded and what the format of the test will be.
On the other hand, the student I mentioned above who lives de facto in one of Cornell’s computer labs—I have not once heard him talk about anything specifically regarding a test. The closest was talking about the material that was to be covered, but he was talking about stuff that was beyond what the class taught for the topic, stuff that he knew was not going to be on the test.
Some of you might be thinking, “That’s great, but what kind of job is he going to get if he is not grade-focused?” Good question. After working there for a summer internship, this same friend rejected a return offer from Goldman Sachs.
How to Break from GPA-centrism
I am not worried about this person’s career at all. I am worried about the second tier, the smart people doing mundane tasks, wasting a lot of potential creative brainpower that the world needs more than ever. Renewable energy, bioengineering, artificial intelligence, space exploration and colonization, nanotechnology—there are so many people here who would be excellent for these fields, yet many of them seem too bogged down by current competition-induced stress to even think into the future.
Indeed, this GPA competition is a force to be reckoned with, as it really is self-perpetuating. Students in other groups, or those apathetic about grades, tend to become more grade-focused just from sitting in lecture, as there are always people asking for as many details about the test as possible. I feel that this defeats the purpose of a test, which is to measure how well you know the material or how well you can apply a certain skill, not how much of the test structure you can memorize or how much content you can cram the night before.
Overall though, there are some measures that can be taken to reduce this kind of stress.
Reduce the importance of the GPA. I do not know if I would go so far as to remove it, though. For example, at Brown University, the GPA is not calculated. Somewhere in the middle ground should be best.
Stop showing score distributions, or show them only for major tests like a midterm/final. In many engineering classes at Cornell, the first thing that is requested after a test is graded is to see the score distribution (often in graph form), along with the mean, median, standard deviation, etc. In fact, this has become so common that it is now the first thing the professors put on their lecture slides. Moreover, the computer science department uses an online course management system which automatically tells students the mean, median, standard deviation, etc. for every single assignment, not just tests. Being a math major who would normally love extra statistics, I thought this was cool at first. But now I despise it: it is just too much information that I don’t need in order to learn the material, and it only detracts from my learning experience. And the way the page is set up, it is not something you can just ignore.
A side note: One of my classes in the CS department actually does not list grades, and I definitely feel more pressured to actually learn in that class, not more pressured to beat other people at grades like in other CS classes. Props to Professor John Hopcroft.
Teach better math/science/engineering/CS much earlier in the education system. A friend showed me this article today, a comparison between the CS education systems of the US and Vietnam, a comparison that is horrific for the US. If students already knew the foundations, then college would be what it was supposed to be: going really deep into a topic in a learning atmosphere, not treating us like elementary school children because, well, frankly that’s the level of engineering/CS of many college entrants. For instance, I think it’s great that students are trying to take Calculus 1 freshman year and then do engineering. Hard work is certainly a virtue. But wouldn’t it be much better for both the student and the college if they mastered calculus in high school? Imagine how much better our engineers would be.
What I envision is a class where students are trying to learn, not to beat each other on a test. I hope this vision is not too far-fetched.
You probably learned a bunch of things in school math about what you can and can’t do. When you were a first grader, perhaps you learned that you can’t subtract 5 from 2, but later on, you learned about negatives: 2 – 5 = -3.
You also might have been told that you can’t divide 2 by 6, but then you learn about fractions. And by now, you are no doubt an expert at splitting 2 pizzas among 6 people.
Even so, there is much more that you may not have known…
1. Infinity Is the Largest Number

Remember playing a “what’s the highest number?” game with someone? Every time you said a number, they countered by saying your number plus one, unless you said infinity. Because infinity plus one is still infinity, right?
Here is where ordinals come into play. The ordinal number ω (omega) is defined as the first number after ALL of the positive integers. No matter what normal number they might say, whether it’s ten billion or a googol, ω is far, far larger. It is practically infinity.
But then you can add one to it, and it becomes an even bigger number. Add two, and it becomes even bigger.
What the heck is going on? If you count an infinite number of numbers after omega, you get two omega? Is this two times infinity? And then three omega? And then omega squared?
It turns out this keeps on going. Eventually you will get ω^ω, and then ω^ω^ω, etc. And then you reach Ω (big omega), which is larger than all things that can be written in terms of little omegas. And then you can make bigger things than that, with no end.
So the next time someone claims infinity is the largest number, you can confidently reply, “infinity plus one.”
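One can even model a small fragment of ordinal arithmetic in code. The sketch below (my own toy construction, not a standard library) covers only ordinals below ω², written as ω·a + b, just enough to show that ordinal addition is not commutative:

```python
# A tiny model of ordinal arithmetic for ordinals below omega^2,
# writing w*a + b as the pair (a, b). This is only a sketch, to show
# why w + 1 is a genuinely bigger ordinal while 1 + w is still just w.

def add(x, y):
    """Ordinal addition of x = w*a + b and y = w*c + d.

    If y contains any copies of w (c > 0), the finite part b of x is
    absorbed: w*a + b + w*c + d = w*(a + c) + d. Otherwise the finite
    parts simply add.
    """
    (a, b), (c, d) = x, y
    if c > 0:
        return (a + c, d)
    return (a, b + d)

w = (1, 0)    # omega itself
one = (0, 1)  # the finite ordinal 1

assert add(w, one) == (1, 1)   # w + 1: strictly bigger than w
assert add(one, w) == w        # 1 + w: the 1 is absorbed, still w
assert add(w, w) == (2, 0)     # w + w = w*2
```

So "infinity plus one" really is a new number, even though "one plus infinity" is not.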
2. You Can’t Divide by Zero
Actually, under certain conditions, you can.
The field of complex analysis is largely based around taking contour integrals around poles. Another word for pole is singularity. And another word for singularity is something you get when you divide by zero.
Consider the function y = 1/x. When x is 1, y is 1, and when x is 5, y is 1/5. But what if x is 0? What happens? Well, 1/0 is undefined. However, if you look at a graph, you see that the function spikes up to infinity at x = 0.
What you do in complex analysis is integrate in a circle around that place where it spikes to infinity. The result in this case, if done properly, is 2πi. It’s quite bizarre.
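As a sanity check (not a proof), this contour integral can be approximated numerically: walk around the unit circle z = e^(iθ), summing f(z)·dz for f(z) = 1/z.

```python
# Numerically approximate the contour integral of 1/z around the
# singularity at 0, along the unit circle z = e^(i*theta).
# The exact answer from complex analysis is 2*pi*i.

import cmath

N = 100_000
total = 0j
for k in range(N):
    theta = 2 * cmath.pi * k / N
    z = cmath.exp(1j * theta)          # point on the unit circle
    dz = 1j * z * (2 * cmath.pi / N)   # dz = i*z dtheta
    total += (1 / z) * dz              # f(z) dz with f(z) = 1/z

print(total)  # very close to 2*pi*i
assert abs(total - 2j * cmath.pi) < 1e-9
```

Note that (1/z)·(iz) = i at every point, so the sum collapses to i·2π, which is exactly why this particular integral comes out so cleanly.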
3. You Can Only Understand Smooth Things
Actually, there is much theory on crazy, “pathological” functions, some of which are discontinuous at every point!
The image above is kind of misleading, as it is a graph of the Cantor function, which is actually continuous everywhere (!), but nonetheless manages to rise despite having zero derivative almost everywhere.
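The Cantor function has a short recursive description, and a rough version (with a depth cutoff standing in for the infinite limit) can be sketched as:

```python
# An approximate Cantor function ("devil's staircase"): flat on every
# removed middle third, yet somehow climbing from 0 to 1.

def cantor(x, depth=40):
    """Approximate the Cantor function on [0, 1] by recursion."""
    if depth == 0:
        return 0.5          # cutoff for the infinite recursion
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if x < 1/3:
        return cantor(3 * x, depth - 1) / 2
    if x <= 2/3:
        return 0.5          # constant on the middle third
    return 0.5 + cantor(3 * x - 2, depth - 1) / 2

assert cantor(0.0) == 0.0 and cantor(1.0) == 1.0
# Flat (zero derivative) on the removed middle third...
assert cantor(0.4) == cantor(0.6) == 0.5
# ...yet it still climbs overall.
assert cantor(0.1) < 0.5 < cantor(0.9)
```

The function is constant on a collection of intervals of total length 1, which is exactly the "zero derivative almost everywhere" oddity mentioned above.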
There is another function with the following properties: it is 1 whenever x is rational and 0 whenever x is irrational. Yet this function is well understood and is even integrable. (The integral is 0.)
Then you have things that are truly crazy:
The boundary of that thing is nowhere smooth, and it is one of the most amazing things that have ever been discovered. Yet it is generated by the extraordinarily simple function f(z) = z² + c, which most people have seen and even studied in school.
4. You Must “Do the Math” and Not Draw Pictures
Actually, math people use pictures all the time. The Mandelbrot set (the previous picture) was not well understood until computer images were generated. There is no single “correct” way of doing the math. Some fields are largely based on pictures and visualizations.
How else would anyone have thought, for example, that the Mandelbrot set would be so complex? Without seeing that in pictures, how would we have realized the fundamental structure behind the self-similarity of nature?
Yeah, that’s a picture of broccoli. Not a mathematical function. Broccoli.
5. If It Doesn’t Make Sense, It’s Not True
Actually, many absurd things in math can be perfectly reasonable.
What’s the next number after 7?
8, you say. But why 8? What’s wrong with saying the next number after 7 is 0? In fact, I can define a “number” to only include 0, 1, 2, 3, 4, 5, 6, and 7. Basic operations such as addition and multiplication can be well defined. For example, addition is just counting forward that many numbers. So 6 + 3 = 1, because if you start at 6 and go forward 3, you loop back around and end up at 1.
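This eight-number system is exactly arithmetic modulo 8, and it is easy to play with directly:

```python
# The eight-number "clock" arithmetic described above: the numbers
# 0 through 7, where counting past 7 wraps back around to 0.

def add_mod8(a, b):
    return (a + b) % 8   # count forward b steps, wrapping after 7

def mul_mod8(a, b):
    return (a * b) % 8

assert add_mod8(7, 1) == 0   # the next number after 7 is 0
assert add_mod8(6, 3) == 1   # start at 6, count forward: 7, 0, 1
assert mul_mod8(5, 5) == 1   # 25 wraps around to 1
```

These operations satisfy the usual laws (associativity, commutativity, distributivity), which is why this really is a legitimate number system and not just a party trick.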
Even weirder is the Banach-Tarski Paradox, which states a solid sphere can be broken up into a finite number of pieces, and the pieces can be reassembled to form TWO spheres of the exact same size as the original!
I hope this was understandable for everyone. May the reader live for ω+1 years!
One book I read on my trip to the University of Chicago was a compilation of Aims of Education addresses, which are given at the beginning of the year to first-year students at the University of Chicago. I’ll refer to the publication as The Aims of Education. (Note: This is NOT the book of the same name by Alfred North Whitehead.)
I had finished this book before stepping on campus, so it was good, in my mind at least, to have an idea of what the faculty thought of education and the university, especially the concept of liberal (or liberal arts) education and its meaning in the modern world. But the most compelling speech in the collection, I thought, was Andrew Abbott’s “Aims of Education Address” of 2002. (Here is an online text of the speech.)
The speaker is quite frank, admitting, “This is only the third or fourth such oration that I’ve given in my life. And you’re not an easy audience.”
He then congratulates the entering class, saying they have “already won.” That is:
[T]he real work predicting your future success is done not by prestige of college but by other factors—mainly the things for which you were admitted to that selective college in the first place—personal talents, past work, and parental resources both social and intellectual. The estimate of your future worldly success that we can make on the basis of knowing those things already will not be improved much by knowing what you actually do here. Moreover, admission itself sets up a self-fulfilling prophecy; since you got in here, people in the future will assume you’re good, no matter what you do or how you do while you are here.
Why did I embolden that last sentence? Because Abbott is basically saying it doesn’t matter what you do once you’ve gotten in. Evidence?
[T]he best nationwide figures I have seen suggest that a one-full-point increment in college GPA—from 2.8 to 3.8, for example—is worth about an additional nine percent in income four years after college. Now that’s not much result for a huge amount of work.
I’m sorry to bore you with this income story but I want to kill the idea that hard work in higher education produces worldly success. The one college experience variable that actually does have some connection with later worldly success is major. But in the big nationwide studies, most of that effect comes through the connection between major and occupation…. But within the narrow range of occupation and achievement we have at the University of Chicago, there is really no strong relation between what you study and your occupation in later life.
Here comes the interesting part. Abbott proceeds to give statistics as to which majors obtain which occupations, and which occupations are held by which majors, all from a sample of UChicago alumni.
Take the mathematics concentrators: 20 percent software development and support, 14 percent college professors, 10 percent in banking and finance, 7 percent secondary or elementary teachers, and 7 percent in nonacademic research; the rest are scattered. Physics concentrators are similar, but more of them are engineers and fewer are bankers. Biology produces 40 percent doctors, 16 percent professors, 11 percent nonacademic researchers, and the other third scattered. Obviously, there are a number of seeming pathways here. All the science concentrations lead to professorships and nonacademic research. And biology and chemistry often lead to medicine. But there are also many diversions from those pathways. We’ve got a biology concentrator who is now a writer, another who is now a musician. We’ve got two mathematicians who are now lawyers, and a physics concentrator who is now a psychotherapist.
UChicago, though, is best known for its economics program. What about it?
[T]his is today identified as overwhelmingly the most careerist major…24 percent in banking and finance, 15 percent in business consulting, 14 percent lawyers, 10 percent in business administration or sales, 7 percent in computers, and the other 30 percent scattered.
Other social sciences?
Historians are often lawyers (24 percent) and secondary teachers (15 percent), but the other 60 percent are all over the map. Political scientists have 24 percent lawyers, 7 percent each professors and government administrators, and perhaps 20 percent in the various business occupations; the rest are scattered. Psychologists, surprisingly, are also about 20 percent in the various business occupations, 11 percent lawyers, and 10 percent professors; the rest are scattered. Thus in the social sciences, the news is that there are lots of ways to get to law school and to get into business. And there are the usual unusuals: the sociology major who is an actuary, the two psychologists in government administration, the political science concentrator now in computers.
English majors have scattered to the four winds: 11 percent of them to elementary and secondary teaching, 10 percent to various business occupations, 9 percent to communications, 9 percent to lawyering, 5 percent to advertising; the rest scattered. Of the philosophers, 30 percent are lawyers and 18 percent software people. I defy anybody to make sense out of that. Again, the connections include some obvious things and some non-obvious things. We have two English majors who are now artists and one who is an architect. We have a philosophy major who is a farmer and two who are doctors.
Now, what’s really interesting is when the stats are seen the other way around, from occupations to majors.
Of the lawyers, 16 percent came from economics, 15 percent from political science, 12 percent from history, 7 percent each from philosophy, English, and psychology; and 5 percent from public policy. There was at least one lawyer from each of the following: anthropology, art and design, art history, biology, chemistry, East Asian languages and civilizations, fundamentals, general studies in the humanities, geography, geophysical sciences, Germanic languages and literatures, mathematics, physics, religion and humanities, Romance languages and literatures, Russian and other Slavic languages and literatures, and sociology. You get the point. There is absolutely no concentration from which you cannot become a lawyer.
He then goes on with doctors and bankers/financiers, which have similarly large numbers of seeming anomalies. “What you do here does not determine your occupation in any way.”
The next point Abbott takes up is the widely held notion that although a person will eventually forget the specific knowledge learned in college, he or she will retain the general skills. He states, however, “Since this is the argument I have myself made most strongly in the past, I shall take special care to demolish it.”
College graduates, and especially those from elite colleges, are deemed to have these better “general skills.” But the question is whether or not college students already possessed such general skills when they entered. As Abbott put it:
While we do know that people acquire these skills over the four years they are in college, we are not at all clear that it is the experience of college instruction that produces them. First, the kinds of young people who go on to college, and certainly to elite colleges like this one, are quite different from those who do not. If in our analyses we do not have perfect statistical control for all those differences, college may appear to have effects that in fact really originate in the differences between those who go to college and those who don’t.
To this selection bias effect (as it is called), we can add the equally difficult problem of unmeasured variables. Changes that we might attribute to college instruction could actually derive from other things. College students are likely to have more challenging jobs, for example, than students who don’t go to college. They spend more time hanging out with smart people. They live in an environment where cognitive skills are explicitly valued. The differences of skill could be produced by these things rather than by the actual educational experience of the college classroom. Moreover, since many cognitive skills cannot be shown to differ seriously between those who have experienced college and those who have not, much of the skill increase could come from simple maturation. You could get more skilled just because you’ve lived a few more years.
In other words, even though there is a positive correlation between a college education and critical thinking skills (which he later discusses), the tests are not normalized for age, so the increase in critical thinking skill might be a result of maturation rather than of the college education itself.
I’m going to jump forward in the speech a few pages. In the skipped sections he discusses further why a college education may not be so useful, especially one from an elite university. Yet, it seems ironic that he would be saying this to a group of students who are just about to begin four years of their life at such a place. So what, then, is the aim of education?
So the long and short of it is that there is no instrumental reason to get an education, to study in your courses, or to pick a concentration and lose yourself in it. It won’t get you anything you won’t get anyway or get some other way. So forget everything you ever thought about all these instrumental reasons for getting an education.
The reason for getting an education here—or anywhere else—is that it is better to be educated than not to be. It is better in and of itself. Not because it is a means to some other end. It is better because it is better. Note that this statement implies that the phrase “aims of education” is nonsensical; education is not a thing of which aims can be predicated. It has no aim other than itself.
But surely education teaches us the skills to survive in a changing world! Abbott’s response: Not quite. “That is because the skills change, too. Writing was a far more important skill a century or even half a century ago than it is today.”
If education has no aim, then what is it?
By education I am going to mean the ability to make more and more complex, more and more profound and extensive, the meanings that we attach to events and phenomena. When we are reading a text, we call this adducing of new meanings interpretation. When we are doing mathematics, we call this giving of meaning intuition and proof. When we are reading history, we call it a sense of historical context. When we are doing social science, we call it the sociological imagination. In all these areas, to be educated is to have the habit of finding many and diverse new meanings to attach to whatever events or phenomena we examine. We have lots of standard routines for doing this—interpretive paradigms, heuristic methods, theoretical schemes, investigative disciplines, and so on. But education is not about these paradigms and methods and disciplines. Rather it is the instinctive habit of looking for new meanings, of questioning old ones, of perpetually playing with and fighting about the meanings we assign to events and texts and phenomena. We can teach you the paradigms and the methods, but we can’t teach you the habit of playing with them. That’s something you must find within yourself.
In this sense, education plays the opposite role of what we would conventionally expect: “Education doesn’t have aims. It is the aim of other things.” Now, there is one passage that I wish to quote, not because it follows directly in the argument, but because of a tie to something I read on the flight back from Chicago—Hermann Hesse’s Siddhartha. First, Abbott:
As teachers, we try to entice you into this habit of education by a variety of exercises, just as a Zen monk tries to get a novice to achieve enlightenment by giving him a koan to meditate on. Note that the Zen koan is not enlightenment but rather is a means to enlightenment…. They are exercises we give you hoping that they will somehow help you find the flash of enlightenment that is education.
This is really the point of Siddhartha: the same lesson, approached from a different angle. In the novel, the eponymous protagonist rejects the teachings of Gautama (the Buddha) precisely because he knows he will not obtain the truth he seeks by learning it from someone else, even if that someone else is the Exalted One: he must experience it himself.
Back to Abbott’s speech. I’ll let Abbott give the concluding thoughts:
To put it simply, the system as it currently exists trusts you with the whole store. Education is the most valuable, the most human, and the most humane basis around which a person can build him- or herself. And you are here offered an unparalleled set of resources for finding the flash of enlightenment that kindles education within you. But it is in practice completely your decision whether you seek that flash. You can go through here and do nothing. Or you can go through here like a tourist, listening to lectures here and there, consulting your college Fodor’s for “important intellectual attractions” that “should not be missed during your stay.” Or you can go through here mechanically, stuffing yourself with materials and skills till you’re gorged with them. And whichever of these three you choose, you’ll do just fine in the world after you leave. You will be happy and you will be successful.
Or on the other hand you can seek education. It will not be easy. We have only helpful exercises for you. We can’t give you the thing itself. And there will be extraordinary temptations—to spend whole months wallowing in a concentration that doesn’t work for you because you have some myth about your future, to blow off intellectual effort in all but one area because you are too lazy to challenge yourself, to wander off to Europe for a year of enlightenment that rapidly turns into touristic self-indulgence. There will be the temptations of timidity, too, temptations to forgo all experimentation, to miss the glorious randomness of college, to give up the prodigal possibilities that—let me tell you—you will never find again; temptations to go rigidly through the motions and then wonder why education has eluded you.
There are no aims of education. The aim is education. If—and only if—you seek it . . . education will find you.
One interesting component of the International Baccalaureate (IB) is the Theory of Knowledge (TOK) class, a one-year course that, at my school, is taken in the second semester of junior year and the first semester of senior year. The reason I would describe it as an interesting component is that the class is so different, so bizarre in comparison to the other IB classes we take. Instead of teaching a set curriculum about a particular subject and then preparing for an end-of-year examination, TOK emphasizes thinking, or at least, the way we think, or the “ways of knowing.” It has some elements of a philosophy course, and though it does not completely qualify as one, our teacher Dr. Schaack would categorize it under “Applied Philosophy.”
Regarding the purpose of TOK, our teacher described it as in part to find out whether students can think. Thinking is quite a different activity from test-taking. The IB wants to make the most out of an individual, and one part of this is to tweak the way we think, or at least make us aware of different theories of knowledge. As such, it is an interdisciplinary course, where matters from all other subjects are discussed.
What does one do in TOK? This is a frequently asked question, and I asked it myself of IB seniors several times last year. If I had to describe the class in three words, I would say, “discussion,” “thought,” and “application.” Discussion of what? Of almost any topic you can imagine. In just my class, I have heard and participated in discussions about current events, the subjectivity of knowledge, quantum mechanics and its relation to reality, the Iraq War, dreams, the three-second present moment, Nobel prizes and laureates, the fourth dimension, essay writing, college applications, the authority of science, and much more. Our discussions have mainly been centered on two texts: Sophie’s World (1991) by Jostein Gaarder in junior year, and Zen and the Art of Motorcycle Maintenance: An Inquiry into Values (1974) by Robert Pirsig in senior year. These are only two of the many examined works (a list not limited to books), and especially with the latter, we have had some extraordinarily thought-provoking discussions.
Other works studied in my class include: “Allegory of the Cave” by Plato, Waking Life (2001) by Richard Linklater, “The Dimension of the Present Moment” (1990) by Miroslav Holub, Flatland: A Romance of Many Dimensions (1884) by Edwin Abbott, Nobelity (2006) by Turk Pipkin, and “The Fourth Dimension,” a chapter of The New Ambidextrous Universe (1991) by Martin Gardner.
The second word, “thought,” necessarily accompanies the first. After all, one cannot participate in a high-level discussion without thinking about what to say. Thus, to participate, one must review what has been discussed already and then continue on or take a different path. It is a class where any relevant, insightful thought is welcomed.
Finally, on to “application.” What use are thoughts that do not apply to the world? In discussions, even of abstract concepts, we often cite concrete examples to demonstrate the implications of our ideas. Even if a topic does not directly affect our daily lives, for example the existence of black holes, a discussion of such in a talk about the advancement of science is more grounded than one that only refers to science in general.
In addition, two essays of prime importance are written in this class. The first is the Theory of Knowledge essay, which I have actually added to this site (under Essays). The writer selects one of ten prescribed titles and writes more or less freely about it in around 1500-1600 words. It is a mostly free-form essay, written in a highly personal manner; the teacher recommends up to three sources in the bibliography.
The Extended Essay, on the other hand, is a wholly different matter. It is essentially a research paper on a topic of the student’s own choosing, and it should contain at least 10 sources (except in special cases such as essays in mathematics or the experimental sciences). The length limit is 4000 words. Mine is not yet finished; as of today, half the essay is due next class.
TOK, in short, is a class unlike any other. Simply put, I have never before had such a thought-provoking and thought-changing experience.