Monday, April 30, 2012

Multiple Inheritance: Comparing Scala, Java, and C++

Apparently my last post, on operator overloading in Scala compared to C++, managed to ruffle a few feathers and got some discussion time on Reddit. There were some who felt that I simply have a problem with C++. It should be known that I use C++. Most of my research code is written in C++. As it stands, if you want to get close to the machine in a non-managed language, C and C++ are pretty much the only broadly used options out there.

The reality is that there is no perfect language, and adding features to languages often has the impact of making the syntax dirty. C++ happens to be a language with a rich history and a lot of added features. Java is moving that way as well. Currently Scala isn't there. Time will tell how Scala evolves. Being born with things like type parameters and function literals is helpful, but I fully expect our understanding of what makes for the ideal programming language to continue to grow. That will drive new language development and push existing languages to include new features.

The topic of this post is another feature that really struck me when I was reading "Programming in Scala". One of the other features that the creators of Java left out, partially based on lessons learned from C++, was multiple inheritance of classes. C++ gives programmers the ability to inherit from classes in any way they want. This causes problems in at least two ways. First, any time you inherit from more than one class, you have the possibility of ambiguity. Class D below inherits from both B and C. As such, it inherits a bar method from both, so calls to bar on an instance of D are ambiguous by default. The result is lots of use of the scoping operator.

[Figure: an inheritance diamond, with D inheriting from both B and C, which both inherit from A. The definition of bar in both B and C also illustrates ambiguity even without the diamond. This figure appears in my book.]


The bigger problem arises when you have a diamond in your inheritance structure. The C++ model of inheritance gives subtypes a full copy of the supertype. So if there are two paths to a supertype from a subtype, there will normally be two copies of that supertype in the subtype, and any reference to it will have to be disambiguated with the scoping operator. To get around this, C++ includes virtual inheritance, which makes the two paths share a single copy of the supertype. This can all be implemented in a manner that preserves speed, but it adds some complex details to the language, which a programmer can trip over just by trying to set up what seems like a perfectly logical type hierarchy. A full discussion of these matters and how they can be implemented can be found in "Multiple Inheritance for C++".

The reaction to this complexity in Java was to simply disallow multiple inheritance of classes. This approach had also been taken in Modula-3. Unlike Modula-3, the Java creators added a construct called an interface, which is basically a completely abstract class. This was done not only because of experience with C++, but also because researchers (such as Snyder) had pointed out that inheritance actually provides two separate features to a language. It doesn't just give implementations to the inheriting classes; it also provides a common supertype for polymorphism. An interface only helps with the subtyping for polymorphism because it doesn't contain any implementation code.

This fixes all potential problems with the figure above, as the only way to get such a setup in Java is for A to be an interface, along with at least one of B and C. So implementations can only come from a single parent, and there are no ambiguities. You retain the ability to express that a type is a subtype of two supertypes, so it seems like all is good.

One of the things that happens with language development is that it often takes a long time to learn exactly what impact any given design decision will have. In the case of Java interfaces, there is a subtle nudge toward making thin interfaces, or interfaces that have few methods in them. The reason is that no one wants to implement an interface with a lot of methods. When you implement an interface, you have to implement every method in it. If the interface includes 50 methods, that is a lot of coding. You can make an abstract class with generally valid default implementations, but that can cause challenges later, as the general advice is to program to an interface (from Effective Java), and it is easy for the abstract class to gain a few helpful methods that aren't in the interface. Things go downhill from there.

The creators of Scala went with a different approach. There is still only single inheritance of classes, but interfaces are replaced by traits, which do not have to be completely abstract. As with Java interfaces, you can inherit from many traits. To prevent the ambiguity problems of multiple inheritance in C++, traits are linearized, so that there is a specific order of resolution. (This is not unique to Scala, though Scala might well be the most broadly used, statically typed language to do this.) The benefit of this approach over Java is that it is easy to create traits with rich interfaces (lots of methods) where most of the methods are built on a small number of abstract ones. Anyone implementing the trait need only provide implementations of those few abstract methods. They can override others if desired, perhaps for performance reasons, but they don't have to.
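To make this concrete, here is a minimal sketch of such a trait, loosely modeled on Scala's own Ordered. The names RichCompare and Money are made up for illustration; the point is that the single abstract method, compare, supports a whole family of concrete ones.

    // One abstract method; four operators built on top of it.
    trait RichCompare[T] {
      def compare(that: T): Int                     // abstract
      def <(that: T): Boolean = compare(that) < 0   // all concrete below
      def >(that: T): Boolean = compare(that) > 0
      def <=(that: T): Boolean = compare(that) <= 0
      def >=(that: T): Boolean = compare(that) >= 0
    }

    // An implementer writes one method and inherits the rest.
    class Money(val cents: Int) extends RichCompare[Money] {
      def compare(that: Money): Int = cents - that.cents
    }

A Java interface with those five methods would force every implementing class to write all five; the trait asks for one.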

What is more, the approach taken in Scala opens up possibilities for new techniques. Traits can be constructed in ways that pass calls up to the supertype so that implementations can be mixed and matched, and changing the order of inheritance produces different behaviors. This is probably an example of a strength that is also a weakness. Most developers today would not expect behavior to change based on the order of inheritance, which turns situations where it does into "gotcha" logic errors. It is still early days for Scala, so we will have to see how this shakes out. Unless you make calls on super, inheritance order doesn't matter one bit, and in my experience the benefits of rich interfaces have outweighed the downsides. However, I would still consider this something of an open question for Scala as a language.
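To see the order dependence in action, here is the classic stackable-modifications example, adapted from "Programming in Scala"; the queue classes are illustrative, not from any library.

    import scala.collection.mutable.ArrayBuffer

    abstract class IntQueue {
      def get(): Int
      def put(x: Int)
    }

    class BasicIntQueue extends IntQueue {
      private val buf = new ArrayBuffer[Int]
      def get() = buf.remove(0)
      def put(x: Int) { buf += x }
    }

    trait Doubling extends IntQueue {
      abstract override def put(x: Int) { super.put(2 * x) }
    }

    trait Incrementing extends IntQueue {
      abstract override def put(x: Int) { super.put(x + 1) }
    }

    // Linearization calls the rightmost trait first.
    val q1 = new BasicIntQueue with Doubling with Incrementing
    q1.put(1)   // increments, then doubles: stores (1 + 1) * 2 = 4

    val q2 = new BasicIntQueue with Incrementing with Doubling
    q2.put(1)   // doubles, then increments: stores (1 * 2) + 1 = 3

Swap the mixin order and the stored value changes. That is exactly the power, and exactly the surprise, described above.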

Friday, April 27, 2012

Operator Overloading: Scala vs. C++

When I was reading "Programming in Scala" to learn the language, I came across some elements that reminded me of things that were in C++ but were left out of Java because they were seen as dangerous and problematic. One of these is operator overloading. This blog post is largely a response to a post called "'Operator Overloading' in Scala". That post gives a good description of the basics, so I won't repeat it here. I mainly want to write about something I feel it missed: why what Scala does is different from C++, and why Scala's version is less dangerous.

The problem people generally associate with operator overloading is that you can produce unreadable and confusing code. Something like overloading + to do multiplication. I think this is a poor description of the problem. You can write unreadable code without operator overloading as well. For example, you could write a Java method called add which does multiplication. No reasonable developer would do that, and allowing operator overloading doesn't suddenly make people stupid. The real problem with C++ operator overloading is that it is limited to normal operator names. To see why this matters, imagine a situation in Java where you were restricted to method names like "add", "subtract", "multiply", and "divide". Now you implement a vector class that needs methods like dot product and cross product. What do you do?

In C++ if you have any operations that are similar to mathematical operations, the language basically seduces you into using normal operators for them. It is easier to type * than to have a full method call. Perhaps * isn't a perfect fit, but it is the best you can use and you think it will make code easier to work with. The problem is that when other people use the code, the * looks just like any other multiplication and it isn't immediately obvious it is doing something different.

This is where Scala is a major improvement. Operators aren't limited to the standard names. They can use any combination of operator symbol characters. So if you had the vector class mentioned above, you could use *+* for dot product and *** for cross product. I won't argue those are great names. Indeed, I would probably argue against them. However, when another programmer sees a *+* b in code, he/she will know this is not normal multiplication and that something needs to be looked up.

Scala collections, in my opinion, make good use of this. For example, consider a sequence called stuff and a single value called v. You can prepend and append to stuff with v +: stuff and stuff :+ v respectively. If moreStuff is another sequence, then stuff ++ moreStuff concatenates the two sequences. This type of usage is highly readable, but wouldn't be possible in the C++ model.
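A quick REPL session makes this concrete; the values are arbitrary:

    scala> val stuff = Vector(1, 2, 3)
    stuff: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3)

    scala> val v = 0
    v: Int = 0

    scala> v +: stuff
    res0: scala.collection.immutable.Vector[Int] = Vector(0, 1, 2, 3)

    scala> stuff :+ v
    res1: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3, 0)

    scala> stuff ++ Vector(4, 5)
    res2: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3, 4, 5)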

One of the great things about the Scala syntax is that these symbolic operators are not special cases. Any method that takes zero or one argument can be used in operator notation. So instead of using *+* and ***, you could simply use dot and cross. In the code you could then write a dot b or a cross b. The only drawback to this approach is that the dot and cross methods have lower precedence than +. So the expression c + a cross b will be seen as (c + a) cross b instead of c + (a cross b). If you use ***, the normal precedence rules will give you the latter form, which is what is normally expected in vector math.
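Here is a minimal, hypothetical vector sketch showing both spellings side by side; Vec3 and its methods are made up for illustration, not taken from any library.

    case class Vec3(x: Double, y: Double, z: Double) {
      def +(that: Vec3) = Vec3(x + that.x, y + that.y, z + that.z)
      def cross(that: Vec3) = Vec3(
        y * that.z - z * that.y,
        z * that.x - x * that.z,
        x * that.y - y * that.x)
      def ***(that: Vec3) = this cross that   // symbolic alias
    }

    val a = Vec3(1, 0, 0)
    val b = Vec3(0, 1, 0)
    val c = Vec3(0, 0, 1)

    c + a cross b   // (c + a) cross b = Vec3(-1.0, 0.0, 1.0)
    c + a *** b     // c + (a *** b)   = Vec3(0.0, 0.0, 2.0)

Alphanumeric methods like cross sit at the bottom of the precedence table, while *** takes its precedence from its first character, so only the second line matches the usual conventions of vector math.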


In summary, I think that the creators of Java were wise not to follow in the footsteps of C++ with operator overloading. It really does push programmers into doing some silly things. However, removing it entirely was probably an overreaction. With the approach taken in Scala, we get the best of both worlds. You can have operators when you really have mathematical operations, and you get the simplified, infix operator syntax. However, you aren't restricted to basic operator names, or even to symbolic operators, in order to use infix notation.

Sunday, April 15, 2012

A New Syllabus: Alternate Approach to CS1 and CS2

Executive Summary: I'm considering a complete rework of my syllabus and teaching methodology. The result would be pretty much no lecturing during class time. It would also do away with pencil-and-paper tests and quizzes. Those would be replaced with electronic exercises, oral exams, and other alternative formats of assessment.

Background
I've mentioned a few times before that I am considering making some very significant changes to my teaching style for next year. Some of this is prompted by the fact that having the book and making the videos for it automatically opens some new doors. Other influences come from KhanAcademy and various other things I have seen, heard, and read related to delivery of content, as well as my own thoughts on pedagogy. Let's be honest, the goal of teaching is to get things into students' heads. Those things are a combination of knowledge and tools to deal with that knowledge. I really think that technology is opening new doors in this area. That has given me some thoughts, and I wanted to jot some down here with the hope of getting a critique from current and former students as well as anyone else who has an interest.

The reality is that the general form of my syllabus for most of my programming-based courses has changed little in the last 11 years. Details like the number of assignments have changed, and I introduced the "Interclass Problem" a few years back (which I think works really well), but a student who took my courses in 2001 would not notice a significant difference between what they were handed on the first day of their course and what we look at online at the beginning of my current courses. For Fall 2012 I am seriously considering blowing that all up.

Change Can Be Scary
One of the reasons I haven't changed my syllabus much is that change is scary. The general structure has served me well. I've tried some different experiments in courses, including expecting students to read and understand before showing up. That one failed miserably. However, it was something I could easily alter mid-flow because the syllabus hadn't been altered significantly. The syllabus is a contract with students. Once laid out, it needs to be followed. Making serious changes to it isn't something a teacher does lightly.

Of course, change isn't optional. We have to do things to adapt to the new realities of the world around us. I feel like the time has come to make some serious changes to every aspect of my teaching, changes that are substantive enough that it will also alter my syllabus. I want to rework not only how material is presented and interacted with, but how evaluation happens.

No More Lectures
The change I have been considering for a while is to stop lecturing in class. I enjoy lecturing. I think I'm fairly good at it. I've come to realize that it is not the most efficient medium for content delivery, and that technology is providing alternatives. As such, I plan to switch to an inverted lecture where students watch videos and read, then show up to class and code to solve problems. This is similar to my earlier, failed experiment with reading before class. I'm hoping the video lectures can make it successful this time. The real goal here is to spend more time in class having students do things and giving me the ability to critique what they are doing.

An End to Paper Quizzes and Tests
This fall I got rid of the paper "Minute Essays" because Google Forms gave me a better way to do them. I said I would love to get rid of paper quizzes and tests as well, but I didn't feel I could. Lots of my quizzes and tests involve students drawing things out, like trees and lists. Even when the drawings aren't required, they are helpful for partial credit. I am starting to think that it is time to look at other ways of assessment and to break free of paper. The goal here isn't really getting rid of paper. That is just a nice side effect. The goal is to improve learning and assessment while making life easier on me as well. Putting all of those together makes things more efficient. As I argued in my last post, I think that education really needs to drive for this goal, lest we become the next market to be dramatically disrupted.

Getting rid of quizzes and tests in their current form is going to require some more interesting technology. In this case, Google isn't going to write it for me. Thankfully, I'm a CS professor who loves to code, so I just need a good idea of what the software needs to do and I can write it myself. I don't want to go the route of multiple choice tests. I tried using Moodle style quizzes for Astronomy to "verify" that students were reading. I need something that goes well beyond that. Mainly, I need something that is specific to CS and coding.

I see two main aspects to this software. First, it needs to evaluate code. Evaluating the functionality of code is fairly easy and has been done by a lot of tools before. I don't know of one that currently exists for Scala, but I don't see that as a big problem; I can certainly write one. The other thing that has to be addressed is the drawing. What hit me today is that I can certainly set up an environment that has the types of tools needed for drawing the things I expect students to draw, along with the ability to record not only the final drawing, but the steps in between, so I can replay their work. For me, that makes this potentially more useful and informative than just paper.
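As a rough sketch of the code-evaluation idea, and nothing more, a checker could run a submitted function against instructor-supplied cases. Every name here is hypothetical; a real tool would also need sandboxing and timeouts for code that misbehaves.

    // Run a student's function on (input, expected) pairs and
    // report the failures.
    def check[A, B](student: A => B, cases: Seq[(A, B)]): Seq[String] =
      for ((input, expected) <- cases; actual = student(input);
           if actual != expected)
        yield "for " + input + " expected " + expected +
              " but got " + actual

    // Example: checking a (correct) absolute-value submission.
    val report = check((x: Int) => if (x < 0) -x else x,
                       Seq((-3, 3), (0, 0), (5, 5)))
    // report is empty here; failures would be listed one per line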

What is more, the same system can eventually be augmented for exercises and might even be able to do automatic checking of correct vs. incorrect procedure. That would allow me to take the time to focus more on things that are incorrect and work with those students to fix the problems. Again, this is an overall benefit. It makes me more efficient and hopefully improves student learning at the same time.

What About Tests?
I'm still a bit up in the air on the tests. I could certainly use the software just described for that type of evaluation. However, there is a part of me that really wants to do oral exams. I have never done them before, because they don't scale well. That part won't change. However, I picture them happening outside of class time. Imagine setting up appointments for them during the semester whenever you are ready to display mastery of a certain set of skills. I'd also like to find a way to do take-home exams that works well. This is an area I'd love feedback on if anyone has ideas.

Variable Pace: Students Running Ahead?
In an earlier post I mentioned the idea of open-ended courses. I think the format described here lends itself to that. If quizzes and tests aren't done in class on specific days, students have a lot more control over when things are completed. I can imagine some students, especially those with background knowledge, finishing all the material early. I can also see students who struggle with one or two areas taking more time on those.

On the whole, I see these as positives. However, there is one thing that I worry about: what becomes of the value of attending class? The shared class time has to follow a certain schedule. That would be my expected pace for the course, and I would be throwing out problems following that pace. Students who get far enough ahead typically won't have a need to come to class. (Though their presence could be helpful in having them help teach others.) Those who fall too far behind won't be able to do what we are doing. Maybe that isn't so different from how things work now, other than the fact that now the assessment happens in class and is locked to the normal pace. What would be the implications of weakening that bond?

If you have any thoughts, please comment. If I am going to jump off into a completely new course format, I want to have lots of input and try to be as prepared as possible for what might happen.

Friday, April 6, 2012

The Education Bubble

It is possible that you have heard of the concept of an "Education Bubble". We have seen a Tech Bubble that burst in 2001-2002. We have also seen a Housing Bubble that burst in 2007-2008. Some people are now predicting an Education Bubble is the next thing that will burst. I didn't really like this term when I first heard it, but recently I have come around to accepting it. This term is used to describe something that I have been worried about for a while, a devaluation of standard college education. This blog post will present my views on this issue and what I think schools, like the one I teach at, need to do to make sure they aren't destroyed when this bubble bursts.

First off, I should explain why I didn't like the term "Education Bubble". I think that a college education is extremely valuable. Studies such as What's it Worth? show that those with a college education earn more than those without, and for many majors, the difference is easily large enough to justify the expense. In addition, those with a bachelor's degree are far less likely to be unemployed than those without. In fact, if you look at the plot linked to in the previous sentence, you will probably notice that even the curve for some college without a degree has moved away from the college-degree curve since the last recession.

The reason I didn't like the term was that it implied to me that the value of college education was going to drop. I don't see that. If anything, I feel that the value of high levels of education is going to increase. One thing I do see changing is that just having a degree in whatever you want is no longer sufficient. I'll come back to this point later.

So if I don't think that the value of college-level education is going down, why did I say above that I foresee a devaluation of a standard college education? The reason is that I see a rise in the availability and quality of non-standard forms of education that will cost a lot less. As prices are set by marginal utility, the devaluation I see isn't of the product itself, but of how much it provides you above and beyond much cheaper alternatives. From the standpoint of what the market will bear when it comes to tuition, it doesn't matter whether the pressure is devaluation from above or increased valuation of lower-cost products; either way, tuition will have to come down.

Where the Pressure Comes From
The source of the pressure that I think colleges should be feeling comes from new technology that allows education on a much broader scale. One of the better known agents in this is KhanAcademy. With funding from the Gates Foundation, KhanAcademy has made some waves. Most people think of it as a tool for teaching elementary kids arithmetic, but if you visit the site you will see they have a lot more going on. The math goes well into the college curriculum, through differential equations. They also have more diverse topics like science, finance, and art history. All of these videos are free to watch. For many of the topics, it is also free to do testing that verifies what you have learned.

Of course, KhanAcademy isn't really gunning to replace colleges. Other sites like Udacity and Coursera are. Those two sites were born out of Stanford in late 2011 and early 2012, and both offer full courses online. They include evaluation in addition to lectures. Udacity was co-founded by Sebastian Thrun, who is probably best known for his involvement in the Google autonomous car project. In the fall of 2011 he offered his Stanford AI course to the world online; over 100,000 students signed up and 20,000 finished the course. A detailed look at this can be found in an article from Wired. There are three significant points I want to call out.

  • Thrun left his tenured position at Stanford to found Udacity. That is how much he believes in this model.
  • Students at Stanford stopped coming to his lectures in favor of the videos because the videos felt more personal to them.
  • "Fifty years from now, according to Thrun, there will be only 10 institutions in the whole world that deliver higher education."

These aren't your standard online courses and things have moved well beyond OpenCourseware.

Speaking of OpenCourseware, MIT has seen this Stanford based "threat" and has answered with MITx. Their goal is to be able to offer certification for completing MIT courses and they want the entire MIT curriculum on there. This type of thing is already being utilized by some people. Today there are questions of acceptance by employers and others, but I don't think it is likely those obstacles will slow things down too much.

Another recent entry into the fray is the Minerva Project. Their goal is a bit different. They want to be an affordable, elite institution that uses material from top faculty from around the globe. In some ways, this shows that people are still feeling around for what will really work. However, it is likely that something will gain traction, and all of these efforts are reducing the margin between what one can get for a low cost (possibly free) and what one gets in the standard college experience.

Current Impact
These new challengers are just starting up, but hard economic times have already forced a number of institutions to make changes. The motivation for change can be seen in declining enrollment and rising discount rates. Basically, it is getting a lot harder to pull in students, especially those who will pay the money that higher education is currently asking for. Some schools are already responding to this by cutting tuition. Even more have chosen to freeze tuition or take other measures.

What it comes down to is that the competition for students and student dollars is getting fiercer. It isn't just an economic problem either. It has moved full force into the political arena as well. State schools have been feeling it for a while as their state funding gets cut. Having the federal government turn an eye to it means that private institutions might have to deal with this as well. Honestly, having state schools defunded helps private institutions, as it closes the tuition gap. Losing national funding from grants and the like won't be so helpful.

Needed Changes
So what needs to happen? The short answer is that tuition needs to come down. Even if the economy picks back up enough that individuals and families are willing and able to pay higher tuition, the availability of real alternatives at much lower cost, or no cost, will eventually push people away from standard colleges unless they drop their price.

How much does tuition need to drop? My gut feeling is that tuition needs to drop to almost nothing. Room and board plus the opportunity cost of being in college is probably going to be a significant hurdle even without additional tuition. I can see some tuition as being needed for various psychological reasons, but I can't see it being a big part of the budget for a college or University. That's going to be a real problem for schools that currently get the vast majority of their budget from tuition. I don't see much hope for those schools in 20 years, perhaps even in 10. The timescale depends as much on social momentum as it does on technology changes. Schools with large endowments have a chance to restructure themselves so that their endowment covers the majority of their budget. That isn't going to happen overnight though. That is where the bubble part comes in. Schools which start making changes in advance can restructure their budgets in a smooth transition and survive. Those who fail to do this will have to rush at the last moment to make the changes and I expect most will burst and fail.

Making this Happen
So how can a school make this happen? I can only speak to this in a limited way, as college budgets are far from my area of specialization. In the case of Trinity, I believe that ~55% of our budget currently comes from tuition. That is a lot of money to shave off. However, if that could be dropped to ~25% in 10 years it would, IMO, be a significant step in the right direction, and at that point we would have a much clearer picture of how much more needed to be done. For this to happen, tuition needs to drop, but having a bigger endowment would certainly help as well.

So how can a University lop off ~30% of its operating budget in 10 years? I have no idea on the details. My hope is that someone reading this will know more about that and come up with something. However, I can speak to aspects related to faculty and technology.

The bottom line is that we need to be more efficient and do more with less. This has been a mantra for businesses through most of the economy. However, it hasn't hit education. Indeed, we have gone the other way. We typically push for lower student/faculty ratios. Trinity is now down to 10:1. We tout that as a benefit in our recruiting. There is no doubt it is a great thing in many ways. There is also no doubt that it is extremely expensive to maintain that ratio. Doing more with less in education really means finding a way to give students the feel of an individualized educational experience with good access to faculty without having 10:1 student/faculty ratios.

In order to do this, we have to re-evaluate how we teach everything. I think we need to take some notes from our upcoming competition as well. If you read the article from Wired linked to above, Thrun mentions that students stopped coming to lecture in favor of the videos. Today's students often consider watching a video in their room to be a more intimate and individualized interaction than being in even a small lecture hall with the professor. This points to one easy change: faculty should not be full-time lecturers. We should take advantage of things like online videos, even making them ourselves. Time spent face-to-face with students should go to the formats that prepared videos can't replicate: answering questions, providing critique, and moderating discussion.

In many ways, making students watch videos is not that much different from asking them to read. I believe that the videos work better, but only partly because they are videos. (The dynamic nature of video can definitely help with some topics.) What makes them really successful is having them done in short segments (10-15 minutes is ideal) and having them followed by questions that help a student identify if he/she understood the material. That can be done for reading too, but making it work requires utilizing technology.

That is the other half to doing more with less. Education has been very poor at making good use of technology. Many different approaches have been tried. Few have been really successful in changing the fundamental nature and efficiency of the education process. I feel like we have finally hit the point where that can change. Electronic evaluation gives us the ability to place little sign posts that students have to go past and check off. These don't have to be a primary form of evaluation, just something strong enough to serve two purposes. They have to tell the student if he/she is really grasping the material and they need to allow the faculty member to check that students are doing each of them. The technology exists to do this today. It takes more prep-time, but once created, it is useful for many years and allows faculty to focus more on efforts that are very individualized. That is the one edge a traditional campus has over something like Udacity, and we have to find ways to maximize it.

It is probably already apparent at this point, but I feel that I have to explicitly mention that any movement toward reduced teaching loads is moving in the wrong direction. Cutting 30% off a University budget is going to require increased teaching loads. Faculty might balk at that, but keep two things in mind. I really am talking about working smarter, not just harder. In addition, if the education bubble does burst and we haven't done this, faculty become unemployed, and most will have little chance of re-employment.

The World Keeps Changing
Now to the other topic that I feel is closely related, and which I know many of my colleagues will probably be upset to hear me say. I feel that we are moving out of the window of time where "a college degree in anything" ensures a job. In an earlier blog post I wrote about differences between what is happening today and the industrial revolution. In regards to education and jobs, prior to the 1920s, people could safely drop out of primary school and still get employed. When the industrial revolution rewrote the rules of American farms, that changed. Somewhere around the 1950s, I believe, the rule had become that a high school diploma was what you needed to make sure you had a good job. For all of my life, ~1980-2010, the rule of thumb has been that a college degree, regardless of what it is in, is the path to a good job and a good life. I think we are moving out of that phase.

This is part of the education bubble. This is the reason you have people like Peter Thiel telling students that they should simply skip college. I personally still see a huge value in a college education. However, there is a grain of truth in what Thiel has to say. I think it is possible today to get a college education that doesn't serve you well later in life. Regardless of what you major in, you have to make certain you pick up skills that will benefit you after graduation. All too often, college students under the impression that "any degree will do" search for the easiest route. I think what Thiel is really pointing out is that the world is constantly growing more competitive, and those who seek the easy way out are dooming themselves to failure, even if they complete a college degree.

Maybe I am Wrong
Of course, it is always possible that I am wrong. It is possible that companies will decide that credentials from places like Udacity have no meaning. Maybe colleges don't have to lower tuition or can even raise it and students will continue to walk through our doors and pay whatever we demand. I don't think I'm willing to bet my career on that assumption though.

I would like to thank Bryan Alexander for ideas and some of the links in this post.

Tuesday, April 3, 2012

Value of Skills in 2022

This post continues my thoughts on curricular issues. I said the last one might upset some people. This one could probably cause various things to be thrown at me the next time I walk across campus if many people read it.

The year 2022 in the title was not selected just because it happens to be 10 years in the future; it was selected because Trinity is currently redoing its curriculum with the charge of considering the needs of the graduate of 2022. As this is another post that some might disagree with, I repeat my request that if you do, start a discussion. That way we might both learn something.

Skills vs. Content
There are many ways of classifying things that one feels students should learn. For this post I am going to use a skills vs. content division and focus on the skills side. When looking at this division, skills are things like reading and writing. They are separate from content in that they are broadly applicable to many different content areas. Every class should be making you read. Every class should also be making you write to some extent. Content can be divided up into its own set of categories. I am going to ignore those because what I want to focus on is a few possible areas of skills that an institution of higher education, such as Trinity, might want to make sure students have command of before they graduate. I want to look briefly at their value today and then look at how that might change by 2022.

Here are the categories of skills I want to consider:

  • Reading
  • Writing
  • Quantitative/Numeric
  • Foreign Language (Natural Language)
  • Basic Programming (think of this as Foreign Language: Artificial Language)
All of these have the characteristic that they can be valuable in conjunction with many types of content. In addition, they are often challenging to teach without associated content, with the possible exception of foreign language where conversational usage provides an automatic application.

Current State of Affairs
The way things stand today, reading, writing, and quantitative skills start in primary school and continue on through college. In the state of Texas now, all students have to take 4 years of English, Math, Social Studies, and Science in high school. When they go to college, all students continue to read and write, though perhaps with less explicit instruction.

Many students pretty much stop taking quantitative courses. There is generally some minimum core requirement for math and science which will have quantitative elements. Students outside of STEM majors are very prone to take nothing above the minimum for that. Having taught introductory astronomy to classes which are mostly composed of students who are taking it to avoid other courses they perceive as harder, and who will often say they haven't done any math in years, I feel fairly confident in saying that by the time they leave college, a great many students have very poor skills at the level of algebra and above.

Foreign language in the US is not introduced until middle school or high school normally, and most colleges have a minimum requirement for that as well. The minimum requirement for foreign language can be anywhere from 2-4 semesters (that would be 2-4 years of HS study). Depending on the details of how it is implemented, many students will never have to take a foreign language in college as long as they took enough while they were in high school.

Then there is programming. In the US it is not generally even offered before high school. Not all high schools offer it, and even when they do, it isn't required for anything so few students take it. At the college level, it is generally not a requirement except in some STEM majors. (At Trinity, the introductory programming course is required by CS, Engineering, Math, and Physics, it is an option for Biology and Geoscience, and it is not mentioned in the Chemistry degrees. No other department requires it, and it is not a University requirement, though it can satisfy one course in the Common Curriculum.)

The standard explanation for this is that programming is not as fundamental a skill as the others. While I would disagree with that even in 2012, I will argue that in 2022 it could be the most fundamental of these skills after reading and be on par with quantitative skills.

Why Coding Instead of Application Usage?
Some readers might find it interesting that the skill I list is programming, not application usage. Indeed, many schools in the US have gone through times where application usage was either required or was at least taught to large fractions of students. While that likely made Microsoft very happy (and I'm sure they donated the software to help make it happen), proficiency in particular pieces of software is extremely non-fundamental. It is something that changes a lot, all the time. How do you pick what software to teach? Why choose one vendor over another for similar programs?

The reality is that knowing how to use a particular application might be helpful in life or even in completing your homework. However, it does not open new vistas in terms of your thinking. The ability to read allows you to acquire knowledge in ways that are completely closed to the illiterate. The ability to do math allows you to approach many problems with a formalism that leads to exact answers in ways you can't do without it. In this same way, the logical formalism of learning how to program opens your mind to new ways of approaching problems. It also gives you access to a completely general problem solving tool that can do things that are impossible for the unaided human.

Teaching a kid to use Microsoft Word is like giving a hungry man a fish. Teaching that kid how to program is like teaching the kid to fish. It gives him/her a new perspective on all problems as well as on what is going on in every program he/she ever uses. Given how much of modern life is spent using computers and software, it is a bit surprising how few people have any clue what is going on inside those magical little boxes. (Indeed, they are magic little boxes to anyone who doesn't have any idea what is going on inside of them.)

Existing Technology and Trends
Of course, technology is ever changing. Computing power is growing exponentially, and with it the pace of change. What is cutting edge today will be mainstream in less than five years. In ten years you will have small devices running things that are only possible today on a supercomputer, or that haven't even been written yet because the mainstream machines can't handle them and so there is no market.

So how is current cutting edge technology impacting the skills mentioned above? 

Reading - This one doesn't even need to go cutting edge. Have you seen a headline for a topic of interest and clicked on it expecting an article, only to find a video? Videos and audio are everywhere. You really don't have to read much to get information these days because technology has made information in other forms fairly ubiquitous. In the case of education, consider sites like the KhanAcademy, where you can pick from hundreds of video lectures. TED-Ed, YouTube EDU, and many other venues are adding great educational material using the dynamic medium of video. Sure, you have to read some words in the videos, but that is pretty low-level reading.

At the cutting edge we are beginning to see a real move from standard textbooks to electronic textbooks. The impetus is that the newer forms of electronic textbooks can be highly dynamic with integrated video and other features that bring the contents to life. Sure they still have writing and you still read, but compare the reading that happens in those books to an old textbook you might pick up from several decades ago. The nature of reading has already changed a lot and will continue to do so.

Writing - The impact of technology on writing at the cutting edge today is probably best seen in Narrative Science. This is a company that makes software to write news stories. Here is one article of many that have been written about the company. You can see the products of the program at Forbes, where Narrative Science has their own blog. The simple message of this is that writing has been automated. The robotic author is not just part of the future; it is part of the present. It isn't yet general enough to work on everything, but what they are doing works off of nothing but machine-readable data. It isn't hard to imagine a program that takes an outline and some basic information from an "author" and produces an essay or short paper in full prose.

Quantitative - Quantitative skills have been part of technology for decades. These days a lot of the instruction for arithmetic is done expecting the use of calculators. Even this has taken significant steps recently with things like the ability to do 3-D plots in Google. Probably the best demonstration of cutting edge though is Wolfram Alpha. This website, set up by the creators of Mathematica, gives you remarkable abilities when it comes to quantitative data. For example, it is a simple matter to get answers for many different mathematical problems, whether symbolic or not. You can even have it look up data sets for you and do math on them, like this search showing the ratio of corporate profits to GDP in the US.

Foreign Language - One of the main goals of knowing a foreign language is to facilitate communication with people who speak other languages. Your smartphone can do that now. There are many different apps that you can put on your phone that will translate your speech to text in a foreign language. Some of the newer work on this includes a Microsoft project that uses your voice to speak the translated text.

Coding - Everything listed above was created by people writing code. You are reading this on a computer that is running an OS that includes hundreds of millions of lines of code using a browser that is code. Computer code/programs are everywhere in 2012. Computing is ubiquitous. There are a number of educational tools like Scratch and Alice, which are designed to make coding more accessible to the novice. However, there really aren't any tools that automate the fundamental process of writing code. 

There are two main reasons for this. First, natural languages contain many ambiguities, and code can't. You have all seen examples of English that has been written explicitly to be unambiguous. That is what you see on tax forms. It is painful, ugly stuff. So even if you create a tool that goes from English to code, the user still needs some basic knowledge of how to help the program remove ambiguities.

The second reason is the halting problem. From the early days, computer scientists have known that there is no way to universally automate coding. This doesn't mean we can't create programs that write other programs as well as humans. This does mean that we can never write a program that can write any other program we desire and be able to demonstrate it is correct.

Where All This Leads in 2022
Math and numeracy are probably the best examples of the fact that just because technology has made it so you don't need to be able to do something, there is still value in knowing how to do it. Calculators have made doing arithmetic by hand obsolete for quite a while now. However, they are garbage in, garbage out devices so if you don't have any idea what the operations really are, you have no feel for the numbers and you don't realize when answers that you get are completely absurd. In addition, symbolic manipulation is probably the more important quantitative skill to have, and it too requires some type of feel for operations for it to make sense. So just because technology can do something, even if it does it better than humans, there can still be a value in humans learning how to do it.

However, the relative values of reading, writing, and foreign language skills are going to take a significant hit in the coming decade as the usage of those skills declines with computers filling in the gaps. To see this, start with foreign language. There are cognitive advantages to knowing more than one natural language, and there are advantages to understanding other cultures in ways that are hard to achieve without knowing the language. In 2022, though, it is likely that a small device will fit into your ear and work like Douglas Adams's Babel fish, allowing you to hear translated speech of those around you with minimal delay. In effect this means that there will be almost no advantage to learning a foreign language in terms of communicating with other people. In addition, I see globalization of markets decreasing for various reasons (which would take a full post to describe, but think 3-D printers and increasing transportation/energy costs), so that will also reduce the impetus for studying a foreign language relative to today.

When it comes to writing, people will need to know how to do it, but I expect a lot of the dirty details will be handled by computers. A human will lay out ideas, and a computer will be able to stitch them together into complete prose. The human can proof them or tweak them, but most of the time probably won't bother. This doesn't apply to poetry or certain other types of artistic writing, but that is a small fraction of what people write today and if anything I see it shrinking, not growing. Narrative Science could probably do most technical writing in under five years. No need to wait until 2022 for that.

Should you even bother writing things if people won't read them? I still expect many things will be read. It will still be a vital skill. However, kids know how to read by the end of primary school. Little instruction beyond that really focuses on the mechanics of reading. At that point reading becomes mostly a tool used to get information. Even though reading will be needed, it wouldn't surprise me if the amount of reading actually done goes down. Why bother reading a static book when other media can present dynamic concepts so much better? What is more, it isn't hard to imagine a Heads-Up Display (HUD) that integrates functionality like Google Goggles, which can not only analyze the items you are looking at and give you back information, but read things to you too. Having it read in your native language might not be something most people want, but having it read things in foreign languages could be extremely helpful.

What about quantitative? This is a hard one for me. I feel like there will continue to be a general need for numeracy, symbolic manipulation, and the general rigor of working within the formal systems that are part of mathematics. This list doesn't explicitly include arithmetic. My gut feeling is that arithmetic is needed for the numeracy, but perhaps someone will find a way around that. The need to do arithmetic is going to decline even further in the future as the ubiquitous computers are very good at arithmetic and they will be always at hand to do it for us. Still, I feel there is a need to understand it at some level so you know what operations connect to different meanings.

Of course, if code is ubiquitous in 2012, it will be far more so in 2022. Does everyone need to know how to write code? No, not everyone, but anyone who wants to be successful in life probably does. This position is probably best laid out by the article "Now Every Company is a Software Company" in Forbes from 2011. My view can be summed up this way: if you think that computers are magic little boxes in 2022, and you have no idea what is happening inside them, you have already lost.

Closing Notes: The Rise of Other Skills
In writing this post, it has occurred to me that while the need for some skills falls, others will rise. One of the current buzzwords in computing is "big data". That is how Narrative Science works today. By 2022 it is possible that everything will be "big data" in one way or another. The sciences are pretty much there today. Politics and economics are too. Digital humanities will pull in the rest of academia. What skills are needed to deal with that? It isn't just coding.

In addition, as reading and writing fall, I see a possible rise in the value of oral communication. The ability to speak to others in ways that entertain and make your case is important today. It might be absolutely essential in 2022.

What are your thoughts? Let me know what you agree and disagree with.

Sunday, April 1, 2012

Option for 4-4, 2-2 at Trinity

This post is rather specific to teaching at Trinity, but it has elements that might interest others in higher education. Plus, I figure that many of the people who see my posts are former students who might at least find this interesting and might even want to provide feedback. Warning: this post contains a few of my opinions that might not make me popular on certain parts of campus. I would suggest that instead of getting offended, a reader who disagrees should try to turn that energy into comments that can lead to constructive discussion. Present why you disagree and elaborate. I typically find that I learn the most when I can have reasoned discussions with people who have very different viewpoints. It makes me think outside of my comfort zone.

Background
Currently Trinity has a teaching load of 9 contact hours per semester, and as most classes are 3-hour courses, this leads to what we call the 3-3 teaching load. In addition, students take roughly 5 of these 3-hour courses each semester. A number of faculty have pushed for going to a 4-4 course load where students take only 4 courses each semester. To make this work, those courses should be 4 contact hours each. (Note that apparently some schools had been basically cheating on this, and newer rules make it so that such courses must truly be 4-hour courses. The schools doing this were often considered good schools that rank highly, which is why they are often cited in discussions.)

The goals of such a change are twofold. First, students are not spread as thin, distracted by so many different topics. Second, faculty can have fewer preps because fewer courses need to be offered. Both can be valid arguments, though I'm not sure I buy either one. For the first, the reality is that students at Trinity have lots of time to pursue fun activities. Even those taking 18+ hours can generally participate in a fair number of hours of various recreational activities each week. If faculty find that students aren't focused enough on their course, perhaps the faculty need to work on making their courses harder so students have to focus.

As for the faculty, my message is similar. I regularly do overloads above a 3-3. I still find time to do my research, write a 900 page textbook, and do things with my wife and kids. I also have students coming by not only for office hours to get help, but just to shoot the shit and talk about cool stuff. Would I enjoy having less work to do? Perhaps, though as most of my work is self-inflicted, that is a position that might be hard to argue for. However, the reality is that the world is a competitive place and it is only getting more so. For students who don't want to lose their social time I point to Occupy protesters who have degrees and no jobs. You need to be better than them. For faculty, my future post on the "Education Bubble" will hopefully make it clear why I think we need to be working our butts off to make sure our entire institution is still relevant in 20 years. Asking for higher wages and lower teaching loads feels to me like a step in the wrong direction. Instead, I think we need to try to move toward having extremely low tuition and the ability to "live" off of little more than the endowment.

The division on support for the 4-4 seems to be reasonably clear across departments. Departments in STEM (along with Business and Music), which have a lot of hours in their graduation requirements, say it isn't feasible. Departments in the humanities and the social sciences, with under 40 hours of requirements, typically like it. Computer Science is a big major and we barely staff things as is. We also wish we could add more courses to the major, not fewer. You can tell from this where I stand.

Having said all of that, I do have an idea for those who really want to go 4-4.

A Simple Proposal
Seriously, this isn't rocket science. If your department wants to go 4-4, do it. Start teaching 4-hour courses. The normal teaching load would be two 4-hour courses. Every so often you throw in something else so you average 9 contact hours. (Maymester and the like would help with that last part.)

The only problem with doing this today is that Trinity's class schedule does not support it. We have blocks of time set up for 3-hour courses. So the second part of this proposal is a change in the time blocks. Right now we have 15 normal time blocks: 9 MWF/MW and 6 TR. Here are two alternatives. In both, the blocks are changed to fit 4-hour courses, though I expect many 3-hour courses would still be taught. They would just leave a bit more free time between sections.

First, to please the faculty who would like to have a day set aside for things like outside speakers, we do MR and TF with blocks of 8:00-9:45, 10:00-11:45, 12:00-1:45, 2:00-3:45, 4:00-5:45, and so on. Most courses would use these blocks. This leaves Wednesday completely open. Unfortunately, there are only 10 time blocks before 6pm instead of the current 15. That is probably too big a drop for the scheduling to work out, but maybe I'm wrong and it could happen. It could be modified so that on Wednesday we allow 3-hour courses until noon. That way there could be MWF courses at 8:00-8:50, 9:00-9:50, 10:00-10:50, and 11:00-11:50 that only allow 3-hour courses. That could cause scheduling nightmares, but it might be enough to make this work, and I have great faith in technology when it comes to scheduling. Computer programs should be building University-wide schedules, IMO. The way we do it today is silly.

The alternative does not leave Wednesday free at all and looks more like our current schedule. You have TR blocks like those from above on 2-hour increments. You have MWF blocks that have students in class 1:10 each day with 10-minute breaks. 8:00-9:10, 9:20-10:30, 10:40-11:50, 1:00-2:10, 2:20-3:30, 3:40-4:50. (The last two could be modified to MW only to preserve Friday afternoon meeting times.) These six blocks, plus the five from TR give 11 blocks to play with. I still don't know if that will be enough. Going to 4-4 only reduces course offerings by 20-25% even if fully adopted. I am not pushing for complete adoption here. Whether 60% adoption would be sufficient to make the schedule work is not clear to me.

They Better Really be 4-hour Courses
One argument that I have heard for the 4-4 was that students in courses from the humanities and social sciences often need more time to dedicate to a class. I am extremely skeptical of this, as my experience is that students wind up spending more time on science and math courses than just about anything else. However, the argument was that when students get pressed for time, they skip the reading in the humanities course and do the assignment in the science/math course, because they might be able to get by without having done the reading, but they can't get by without turning in something. There might be some truth to that. However, if that is the reason, I don't see the 4-4 being a fix; I see it being a path for students to spend less time on academics and more time playing.

If departments do adopt a 4-4, I feel there has to be extreme pressure on the departments and faculty who go that direction to make sure that their courses truly are rigorous enough to warrant four hours of credit.

What do you think? Should Trinity go 4-4? What would be the benefits? What are the shortcomings? What will really happen if we make this transition and only 60% of departments start teaching 4-hour courses?