Computer Science

Outsourcing Homework

Thursday, January 19th, 2006

I only skim the headlines of /. lately instead of getting caught up in the commentary. But one caught my eye: the rise of “rent a coder” services for CS students who want to fake their way to a degree. (Essentially, students are “outsourcing” their homework.) My favorite comment was the following:

If that’s your approach, why not be a business major instead? I mean, if you’re not really passionate about the work, why not pick an occupation that a) pays more and b) is easier to fake your way through?

Heh.

Site cross-pollination – check out h2h

Tuesday, December 13th, 2005

I just wanted to mention that earlier today I finally got hand tracking working on my final project for my Motion Capture / Computer Vision class. You can now connect a webcam to your computer, load up my GTK+ program, and watch boxes with crosshairs follow your hands accurately as you move them across the screen.

Pretty awesome, no? Check out the project if you haven’t yet; its MoinMoin wiki is here. I might post up a video of it in action soon.

It uses a clever skin-tone detection algorithm over the RGB colorspace, along with segmentation of the regions of interest, to determine the cardinal direction a hand is moving in; the box is then retargeted to the new area and the algorithm runs again. I am quite happy with the results. It can only get better, but it’s already pretty fun to play with.
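
For the curious, the per-pixel part of a skin-tone test boils down to a few threshold checks on the R, G, and B channels, and retargeting can be as simple as re-centering the box on the detected skin pixels. The sketch below is illustrative only: the thresholds are a commonly cited rule of thumb rather than my exact values, and my version reasons about cardinal directions instead of a plain centroid.

    #include <stdlib.h>

    /* Per-pixel skin-tone test over RGB. These thresholds are a common rule
     * of thumb, not necessarily the exact values Hand2Hand uses. */
    static int is_skin(int r, int g, int b)
    {
        int max = r > g ? (r > b ? r : b) : (g > b ? g : b);
        int min = r < g ? (r < b ? r : b) : (g < b ? g : b);

        return r > 95 && g > 40 && b > 20 &&   /* bright enough           */
               (max - min) > 15 &&             /* not washed out or gray  */
               abs(r - g) > 15 &&              /* reddish tint            */
               r > g && r > b;                 /* red channel dominates   */
    }

    /* Centroid of skin pixels inside a box of a packed RGB24 frame; a
     * tracker can re-center its crosshair box on this point each frame. */
    static void skin_centroid(const unsigned char *rgb, int stride,
                              int x0, int y0, int w, int h,
                              int *cx, int *cy)
    {
        long sx = 0, sy = 0, n = 0;
        for (int y = y0; y < y0 + h; y++) {
            const unsigned char *p = rgb + y * stride + x0 * 3;
            for (int x = x0; x < x0 + w; x++, p += 3)
                if (is_skin(p[0], p[1], p[2])) {
                    sx += x;
                    sy += y;
                    n++;
                }
        }
        *cx = n ? (int)(sx / n) : x0 + w / 2;  /* no skin: keep box centered */
        *cy = n ? (int)(sy / n) : y0 + h / 2;
    }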

User interfaces with GTK+ and Glade

Friday, November 11th, 2005

I’ve been hacking up a user interface for my motion capture/computer vision project called “Hand2Hand,” found here.

At first I was gonna do the user interface in Python and have the image processing done in C, but then I decided that the user interface was simple enough that I should just give GTK+ in “pure C” form a try. Of course, I used Glade, which drastically reduces the amount of annoying code you have to write for things like VBoxes, HBoxes, and Containers. In fact, using Glade, interface design becomes fairly straightforward even in C. Which is weird, because C seems like it was never built for user interface design, but the g_signal system makes it easy to catch events that occur in your program, and GTK+ is abstracted at a high enough level that you can do pretty well. I don’t know how well GTK+ scales for large programs (i.e. many dialogs, many lists, etc.); in that case, I think I’d definitely pick a higher-level language.
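
To give a sense of how little code is involved, here is a minimal sketch of the libglade-plus-GTK+ skeleton in C. The .glade file name, widget name, and handler name are placeholders, not the actual ones from Hand2Hand.

    /* Minimal libglade + GTK+ skeleton; file, widget, and handler names are
     * placeholders. Build with something along the lines of:
     *   gcc -o ui main.c -rdynamic `pkg-config --cflags --libs libglade-2.0`
     * (-rdynamic lets signal autoconnect look handlers up by name.) */
    #include <gtk/gtk.h>
    #include <glade/glade.h>

    /* Handler referenced by name in the .glade file. */
    void on_start_button_clicked(GtkButton *button, gpointer user_data)
    {
        g_print("start clicked\n");
    }

    int main(int argc, char *argv[])
    {
        GladeXML  *xml;
        GtkWidget *window;

        gtk_init(&argc, &argv);

        /* Load the interface description and hook up named handlers. */
        xml = glade_xml_new("interface.glade", NULL, NULL);
        glade_xml_signal_autoconnect(xml);

        /* Signals can also be connected by hand via g_signal_connect. */
        window = glade_xml_get_widget(xml, "main_window");
        g_signal_connect(G_OBJECT(window), "destroy",
                         G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);

        gtk_main();
        return 0;
    }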

Looking forward to seeing how this application turns out. OpenCV looks like a pretty awesome library.

Just For Fun: The Story of Linus Torvalds

Thursday, September 22nd, 2005

For the last couple of weeks, my bedside reading has been this half-biography, half-autobiography of Linus Torvalds. I have to say, though, that it reads like two books mixed into one. Chapters alternate between Linus talking about his life and the big moments in Linux’s history, and David Diamond describing modern-day Linus with a kind of forced wonder. Truthfully, Diamond comes off as a sycophant who couldn’t care less about Linus’s actual flaws and positive characteristics, and cares more about molding some kind of “image” of Linus as embodying humility and genius simultaneously. Near the end, I started just skimming the chapters not written by Linus. Diamond’s really not a good writer, either. (Sorry, Dave.)

Truthfully, the book kind of pops the lid off Linux and makes you see it as much less glamorous than, say, Wired Magazine described it to the public. Linus really just talks about not having a social life, sitting in his room with the curtains drawn, coding all day. Not exactly the ideal role model, I think. Don’t get me wrong: I love the Linux kernel (as much as one can love imperfect software), and Linus made a great contribution toward keeping the UNIX world and UNIX principles alive. It’s just that I like to think of open source developers as something other than the stereotypical introverted geek. In fact, much of Linus’s portion of the book is devoted to his apprehension about giving a public talk about Linux. When I think about the fact that I’ve given three or four such talks to date, and enjoy it more every time, I see how different I am from that stereotype.

It also kind of made me dislike Linus. When I saw Revolution OS (a documentary on the rise of open source), the movie endeared Linus’s practical nature to me, as opposed to Richard Stallman’s religious idealism. I like idealism, but Stallman is really religious about it. And he’s bitter. Linus, on the other hand, has that great Northern European, “I’m just gonna go with the flow” attitude.

But this book made me realize that Linus is religious in his own sort of way. Included in the book is Linus’s flame war with Andy Tanenbaum over monolithic versus microkernel designs. Truthfully, I’ve studied operating systems and I’m not even sure which design is best, and Linus makes a decent argument for why microkernels end up being just as complex as, or more complex than, monolithic ones. But what I didn’t like is that in the flamefest, Tanenbaum said that deficiencies in MINIX were due to it being a hobby, and that he had duties as a professor. Linus responded, “Re 2: your job is being a professor and researcher: That’s one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba [Tanenbaum’s distributed OS project] doesn’t suck like minix does.”

This just shows me that Linus really is an asshole sometimes. He admits as much outright in the book. So now, truthfully, I may like the open source movement, but I think I “at least dislike” two of its biggest players (Torvalds and Stallman).

Finally, I think an excerpt from Tanenbaum’s website points out a nice principle of OS design:

Also, Linus and I are not “enemies” or anything like that. I met him once and he seemed like a nice friendly, smart guy. My only regret is that he didn’t develop Linux based on the microkernel technology of MINIX. With all the security problems Windows has now, it is increasingly obvious to everyone that tiny microkernels, like that of MINIX, are a better base for operating systems than huge monolithic systems. Linux has been the victim of fewer attacks than Windows because (1) it actually is more secure, but also (2) most attackers think hitting Windows offers a bigger bang for the buck so Windows simply gets attacked more. As I did 20 years ago, I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight Features.

I agree. But does a microkernel design actually reduce the overall size of the operating system, or does it just reduce the size of whatever you consider to be the “microkernel”? That is, just because a file system is implemented as a daemon talking to a driver subsystem through message passing doesn’t necessarily mean the file system or the driver subsystem is secure. Insecurity could exist even at the boundaries, no? Not to mention instability.

I think Linus and Tanenbaum have to agree that this debate isn’t an open and shut case. The best kernel is probably one that mixes modularity, a strong kernel/userspace boundary, and some of the fancier features of a microkernel approach, while not sacrificing elegance of design or performance.

Free Coders at NYU

Wednesday, September 21st, 2005

I’m organizing a group of people interested in hacking open source software in a team environment. Right now I’m calling it Free Coders at NYU, and have already set up a wiki and mailing list. This could end up being very cool. Next meeting is hopefully this coming Tuesday.

I set up a mailing list with GNU Mailman (link above), which was decently painless under Debian Sarge. The only annoying part was getting it to work with my virtual e-mail address mappings, which are stored in MySQL, but I figured out a trick for that.

I’ve already spoken, via e-mail, with Ronald S. Bultje, an open source developer who works on gstreamer among other projects. He has tentatively agreed to do a talk for us sometime this year.

Amdahl’s Law

Thursday, July 28th, 2005

Gene Amdahl once applied the law of diminishing returns to computation. He pointed out that when optimizing part of a computer program or system, one must take into account what fraction of the overall running time that optimization actually affects.
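
Stated concretely (this is the usual textbook formulation, not Amdahl’s original wording): if an optimization touches a fraction p of the total running time and speeds that part up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A tiny C sketch makes the diminishing returns obvious:

    #include <stdio.h>

    /* Overall speedup per Amdahl's Law: p is the fraction of running time
     * the optimization touches, s is the speedup of that fraction. */
    static double amdahl(double p, double s)
    {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void)
    {
        /* Make 10% of the program 10x faster: only ~1.10x overall. */
        printf("%.2f\n", amdahl(0.10, 10.0));
        /* Even a near-infinite speedup of that 10% caps out at ~1.11x. */
        printf("%.2f\n", amdahl(0.10, 1e12));
        return 0;
    }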

I recently read some articles comparing the speed of Python to Java, most of which concluded that about the only place Java beats Python is in raw interpreter speed (i.e. how fast statements are parsed and executed), and that since Python opts to provide thin wrappers around standard C libraries, Python performance ends up being really good.

A good comparison of the language features between Java and Python can be found online, along with a nice comparison of code simplicity and efficiency.

I think I agree with the first author: Python is a better high-level language, and should thus be used for higher-level tasks, especially one-offs. What’s interesting is that a lot of people look at Python, say “Man, Python is slow, I could do this better in C,” but then forget about Amdahl’s Law. If your program spends most of its time on the network, the disk, or any other non-CPU/non-memory resource, no amount of dropping down to a lower-level language will buy you an order of magnitude in performance. So why waste the programmer time, when it can be done in a few lines of Python?

(I think one benefit of Python forgotten in both these articles is SWIG: if you’re truly a performance-oriented engineer, you can always profile your code, find the bottleneck algorithm or code fragment, rewrite it in C, and wrap it with SWIG so that you can access it as an object in Python. Not hard to do, and potentially huge performance gains. OTOH, you can even write Python-accessible code directly in C, using the same abstractions the Python interpreter itself uses.)
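
As a rough sketch of that last approach (the module and function names here are made up for illustration), a hand-rolled Python 2.x extension module in C looks something like this:

    /* Hypothetical hot spot rewritten as a Python 2.x extension module in C.
     * Compile into fastmath.so and then "import fastmath" from Python; the
     * module and function names are made up for illustration. */
    #include <Python.h>

    /* The C workhorse: sum of i*i for i in [0, n). */
    static double sum_of_squares(long n)
    {
        double total = 0.0;
        long i;
        for (i = 0; i < n; i++)
            total += (double)i * (double)i;
        return total;
    }

    /* Python-callable wrapper: parse arguments, return a Python float. */
    static PyObject *fastmath_sum_of_squares(PyObject *self, PyObject *args)
    {
        long n;
        if (!PyArg_ParseTuple(args, "l", &n))
            return NULL;
        return Py_BuildValue("d", sum_of_squares(n));
    }

    static PyMethodDef fastmath_methods[] = {
        {"sum_of_squares", fastmath_sum_of_squares, METH_VARARGS,
         "Sum of i*i for i in range(n)."},
        {NULL, NULL, 0, NULL}
    };

    /* Init function Python looks for when importing "fastmath". */
    PyMODINIT_FUNC initfastmath(void)
    {
        Py_InitModule("fastmath", fastmath_methods);
    }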

Finals: Phase I over

Tuesday, December 14th, 2004

Only two more to go.

In other news, my Averatec laptop almost broke again today when I had it on my lap and it nearly slid off. I caught it at the last second, but this damn laptop is so small that sometimes I don’t even realize how close it is to the edge of my lap!

The thing has been working better and better. I got a program called fvcool, which can put the Averatec into a “low-power mode”: the CPU gets HLT instructions and “powersave signals” when idle. The result is that it runs a lot cooler and saves battery power, but at the expense of “real-time” apps like movie playback and even some MP3 playing, so you can’t leave it on all the time.

The only bad thing about this laptop is, sadly, a trivial thing: the paint job. We all know laptops (except the TiBook/AlBook) are made of plastic under the shiny silver finish, but we wish that silver finish would never come off. However, Averatec apparently used cheap paint, so there are “palm prints” where my palms have rested near the keyboard, as well as a couple of spots (on the corners of the laptop) where the paint has already rubbed off just from coming into contact with things. Kinda sad, because high-quality paint isn’t exactly expensive. Regardless, I’ll just have to paint it myself (or have M help me)… it might be a bit of an ordeal, but I think it’s possible. I’d just have to figure out a smart way to cover the LCD, keyboard, and touchpad… maybe I can get M’s help when I come to that juncture.

Maybe I’ll switch this laptop’s color to something a little less “blah” than silver; a really shiny black would be awesome, I think.

In addition, I fixed some of my nasty X issues, so now my machine never hard locks. So that’s good too.

Now I gotta get studying for Basic Algorithms… fun fun.

Algorithms and vim

Saturday, December 4th, 2004

I spent about 5 hours today doing algorithm homework/studying. I like that class a little more now, even though some of it is a pain. Divide and conquer and dynamic programming actually are powerful concepts, once you get a feel for them.

Then I came back to my dorm, ate some food, and played with vim for literally 2 hours. I guess that was a waste of time, but I learned so much about this editor. Now I feel I can be twice as productive when I code, especially with all the stuff I implemented for prototype previewing in my vimrc, and with all the ctags support vim has always had but I’d never used. Wow, this is one powerful programmer’s editor.

Matt has been telling me that I should use emacs with vi emulation (viper), and then I’ll get access to all that great emacs stuff. Maybe. I have nothing against it, except that emacs seems like an operating system unto itself. Eh, it doesn’t matter I guess, I just need to know one of the two well. They are both portable, and run on all major platforms.

The Human Computer

Friday, December 3rd, 2004

Computer scientists have a lot to learn and realize.

For one thing, computers aren’t the center of the universe. What may be an ideal for computer scientists may not be an ideal for normal people. And very often, computer scientists affect normal people because everyone uses computers (or at least, everyone will).

All these computers sit around on our desks, and we use only a fraction of their power at any time. Right now, I am typing this blog entry, using less than 1% of my CPU’s power. Theoretically, it could be doing things: helpful things, things that would make my life easier. It could be doing smart analyses of what I’m writing and trying to predict what I’m going to do next. It could be some sort of extension of my mind, helping me produce better work. Instead, it sits there idle, useless. A glorified typewriter.

There have been many innovations, but the “humanity” of computers has been lost. We shouldn’t be designing our lives around them… they should be designed around US. The way the human mind works needs to be complemented. I should not have to change my ways for the computer, unless changing them makes my life easier and less complicated, and makes me more powerful as a human being.

These should be enabling devices. They surely have the potential to be enabling devices. But right now, in many ways, they disable us. We are restricted by the rules programmers place on us. We live under a sort of “law of code” which Lawrence Lessig describes in his books. In addition, the gov’t and other groups seek ways to use computers to control people.

Computers need to become more like us, so that they can seem familiar and useful, but at the same time be a whole lot more powerful. What computers have that we don’t is speed, time (CPU time), and effectively unlimited storage. What we have that computers don’t is the ability to reason about our experiences in very flexible ways. Wrapping a computer’s speed and storage around our own flexible abilities as conscious beings would make for a very powerful harmony.

Why are we still talking about how to isolate faults, or make device driver subsystems better, when this human element is so sorely needed, and would be so well appreciated?

Computational consciousness

Saturday, November 13th, 2004

I’m working on this philosophy paper, and am having a bit of a brain struggle. The paper I read makes a very strong case for a model of consciousness that is computational (functional), so that, for example, it is conceivable that a sufficiently advanced computer (or group of computers) could replace the brain and serve the same role (i.e. I’d still have conscious experiences, etc.)… but this is very hard for me to accept at a “gut-reaction” level. Although it would be very easy for me to write a paper defending Dennett’s claim, I am going to have to work through this to figure out what is wrong with it (I am convinced something is wrong with it).

A quote Dennett cited (attributed to Fodor) made me laugh out loud and receive stares in this quiet lounge: “If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”

I have to re-read, and re-read, and outline, and re-read, and maybe, eventually, write.