Keeping applications open

Just an interesting post I made to OSNews in response to someone saying that IE “starts quickly” while Firefox “takes forever.”

Just to clear things up: the only reason IE starts faster on Windows is that IE is technically “always running.” The only thing that has to “start” is the creation of a window with an “IE control” in it.

I get the same behavior on Linux by running galeon -s when my X session starts. This runs Galeon in “server mode,” which means it’s always in memory, and when I run Galeon (on my laptop, I press ALT+F1 to launch my browser), it starts in less than half a second. If Firefox had a similar mode, it could offer you the same thing. As for OpenOffice.org, it’s true that its start time is relatively slow; I’m sure they’ll get around to optimizing it.

Personally, I think the obsession people have with start times on Linux and Windows machines is due to a basic design flaw in most window managers. Applications should really only start up once; if you start an application multiple times in a day, you’re essentially performing redundant computation. The program can sit in memory, and if it really isn’t used in a while, it will get paged out anyway thanks to our modern virtual memory implementations.

In OS X, for example, you can get the same effect as “galeon -s” or IE’s “preloading” simply by not quitting an application after all its windows are closed. This leaves the application running, and when you open a new window it appears nearly instantaneously. (Strangely enough, many old Windows/Linux freaks are sometimes “annoyed” by this aspect of OS X, since in the Linux/Windows world up to now, closing all of an application’s windows has been equivalent to closing the application itself.)
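For the curious, here’s roughly what that looks like on my laptop. This is only a sketch: the session file and the window manager are illustrative, since the details depend on how your X session starts and which window manager you run.

    # ~/.xsession (illustrative sketch)
    # Preload Galeon in server mode so it sits in memory, ready to open
    # new windows instantly.
    galeon -s &

    # Start the window manager as usual. A keybinding (I use ALT+F1) then
    # runs plain "galeon", which simply asks the already-running server
    # process to pop open a new window.
    exec fluxbox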

Real Time with Bill Maher and Conservative Boneheads

There’s a great blog post over at The Liberal Doomsayer, and I provided a reply. Here it is.

Great post. Found your blog via Technorati search for Bill Maher (wanted to see what the blogosphere was saying about his latest show, which I just caught last night).

To call her a bubblehead is right on… I couldn’t believe some of the spin this woman was selling. Do you remember when she said that the reason she can claim Iraqi women are doing better now than before the war, despite journalists and reporters saying otherwise, is that “as you know better than anyone, Bill, the media in this country doesn’t always tell us the truth”?

Oh, that’s right. If Iraqi women were doing better, the media would want us to think otherwise! When we on the Left say that the media distorts the truth, say by presenting White House PR as “the truth” or by presenting America in the most favorable light possible, we can point to a mechanism: press access to the White House is controlled by the White House (duh), and journalists therefore don’t want to piss off the administration too badly, since that might cost them contacts in high places.

Would it offend anyone to provide definitive journalistic proof that Iraqi women are doing better after the Iraq war? Of course not! The White House would love an article like that, and we on the Left wouldn’t mind it either. After all, what the hell are we spending billions of dollars for if humans aren’t even getting basic rights in Iraq?

But this bonehead Conway really is just a talking head of the Right, who parrots what the right-wing machine tells her to say. She is what Paul Krugman recently called “an echo chamber”: someone who simply assumes that what other people in her conservative circles tell her must be true.

Remember when she mentioned that John Kerry voted against what she called “the body armor bill”? She referred to the $87 billion package as “the body armor bill,” even though FactCheck.org and other actual analysts have thoroughly documented the distortion in this claim (a distortion Bush used to win the 2004 election). It pissed me off that Bill Maher didn’t call her on it and instead simply used the equally propagandistic “Well, Kerry fought in Vietnam.”

In reality, the proper response would be to point out that the $87 billion package did include $300 million for upgraded vests, but that was a mere 1/3 of 1 percent (i.e. 0.33%) of the bill’s actual spending.[1] Do you think Kerry voted against those $300 million, or is it more rational to assume he voted against the other $86.7 billion in that bill?

1. http://www.factcheck.org/article155.html

Sadness and remorse for the worst acts of human history

Wow, I worked myself up at this late hour thinking about issues related to the morality of warfare (or lack thereof, as it were), and in particular to Hiroshima and Nagasaki. A particularly naive /. poster (is the adjective “naive” redundant here?) pointed out how we can often forget that “civilians can be enemy combatants,” and he mentions a Mitsubishi plant in Nagasaki, as if that were where most of the casualties in Nagasaki occurred (nonsense, of course, since over 100,000 deaths occurred in that unfortunate city). He then compares America to a police station and Japan to a “man who runs at the station with a bat,” and concludes that it is therefore “all the man with the bat’s fault.” If that reasoning weren’t pathetic enough, he provides another justification for dropping the bombs: that Japan would have done the same, but to New York! Ah, the things I could teach the average /. writer about argumentation. I really hope these aren’t the same folks I’ll meet in the workplace of my future.

Gangs of America: The History of Corporate Power

I am totally engrossed in this book at the moment. My Dad gave it to me to read, and I flew through about 100 pages today while allowing the aforementioned backup processes to run.

Among the gems you discover in this book are these facts:

  • The Boston Tea Party wasn’t so much about taxation without representation or hatred for the British crown as it was about anticorporatism. Colonists were worried about the East India Company moving into the colonies and taking their business. Even back then, colonists saw “globalization” for what it was, calling the East India Company a vile institution which “enslaves one half of the human race to enrich the other half.”
  • The founding fathers were thoroughly against the idea of the corporation, and thought that large monied enterprises were the greatest threat to democracy, as they could subvert the political system if they were not held in check.
  • Even Abraham Lincoln and Thomas Jefferson saw these threats, and in this they were backed by Adam Smith, the economist whose theories are nowadays oft-used to justify corporate existence.
  • During the days of the robber barons, one man essentially created the modern corporation by lobbying the government for the right to intercompany ownership, namely one corporation owning stock in another. Through this law, he established “holding companies,” whose only purpose was to hold stock in other companies. And via holding companies, he was able to take over other corporations and place his corporations outside of any regulation by the state governments. Furthermore, this same man, whose foresight gave him great wealth, also provides a nice historical example of corporate greed unchecked by government power: he bought up newspapers in order to fire editors who didn’t print what he liked, and he bought politicians by offering them posts on the boards of his major corporate entities.
  • Corporations were not always this way. Corporations do not have to be separate legal entities, completely unaccountable to their investors, able to integrate across industries by gobbling up other corporations, able to subvert democracy through political contributions, and able to ruin people’s lives through “externalities.” Once upon a time, American society and American government knew corporations were dangerous, and knew they needed to be carefully monitored and controlled. What happened?

I hope this book answers that last question.

GNU ddrescue and dd_rescue and dd_rhelp, what the?

Wow. I hate when shit like this happens.

Apparently there are three tools out there to help with the same thing. First, there’s dd_rescue, the tool I was using earlier (which ships with Ubuntu in a Debian package called… ddrescue). Then, there’s dd_rhelp, a shell script frontend to dd_rescue which implements a rough algorithm to minimize the amount of time spent waiting on bad block reads.

Then, there’s GNU ddrescue, a separate C++ program that combines the functionality of dd_rescue and dd_rhelp.

I only just realized this and so now I’ve compiled a version of GNU ddrescue to pick up my recovery effort. It’ll probably help with one of the partitions that seems particularly messed up.

So far the nice thing about GNU ddrescue is that it seems faster and more responsive. Plus, it has a real logging feature: if you enable it and then CTRL+C the program, you can restart it and it’ll automatically pick up where it left off.
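As a rough sketch of what that looks like in practice (the device name and paths here are illustrative, not the exact ones from this recovery):

    # First pass: copy the damaged partition into an image file, recording
    # progress in a log file so the run can be resumed later.
    ddrescue -v /dev/sdb5 /mnt/smb/image/sdb5.img sdb5.log

    # If I CTRL+C it (or it dies), rerunning the exact same command makes
    # ddrescue consult the log and pick up where it left off instead of
    # re-reading everything from the start.
    ddrescue -v /dev/sdb5 /mnt/smb/image/sdb5.img sdb5.log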

UPDATE: wow, good thing I switched. GNU ddrescue is significantly faster just in terms of raw I/O performance. I jumped from 4GB of this partition being rescued (which took 30 minutes with dd_rescue) to 6GB in the last ten minutes. It seems at least 3x faster. I also like that the GNU info page describes the algorithmic approach in-depth.

Fried hard disk ruins weekend

So, one of my employers ended up with a fried hard disk for the second time in a row. The main reason is that the PC housing this drive sits in a corner with little to no airflow.

In order to recover the drive, I am taking a different approach from my last recovery effort, mainly by necessity. This disk is seriously damaged–lots of bad sectors, and its partitions are not readable by any NTFS driver, be it Microsoft’s or the open source one. That makes simply using the wonderful R-Studio tool I used last time impossible for now, because the drive isn’t even seen properly within Windows, and everything hangs all over the place.

Indeed, what I needed to do was drop down a layer of abstraction: away from filesystems, and into blocks and sectors. Unfortunately, in the Windows world this drop is difficult, so I had to use my Linux laptop to make the jump.

I found a wonderful tool to help me out called dd_rescue, which is basically a dd with the added features of continuing past read errors, letting you specify a starting position in the input and output files, and running the copy in reverse. These features let you work around bad sectors and even damaged disk hardware to get as much data out as possible.
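A hedged sketch of the kinds of invocations this enables; the flags are from dd_rescue’s help as I remember it (double-check with dd_rescue -h), and the device and file names are made up for illustration:

    # Forward copy that keeps going past read errors.
    dd_rescue -v /dev/sdb5 sdb5.img

    # Skip ahead past a really nasty region: -s sets the starting position
    # in the input, -S the matching position in the output.
    dd_rescue -v -s 4300M -S 4300M /dev/sdb5 sdb5.img

    # Or sneak up on the bad spot from the other side with a reverse copy.
    dd_rescue -v -r -s 6000M -S 6000M /dev/sdb5 sdb5.img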

Unfortunately, the use of this tool was encumbered by my laptop’s relatively simple bus design. Apparently, if I stuck two devices on my USB bus (like the two HDs I was using for this process), the bus would slow to a crawl, and the copy would move along at an unbearable 100kB/sec. I tried utilizing Firewire and USB together, but got only marginal improvements. What befuddles me is that in the end, the fastest combination I could come up with is reading from the Firewire enclosure with my laptop and writing to the Firewire enclosure of my desktop across the LAN via Samba. Very strange indeed. Now my performance is more like 6MB/sec, factoring in all the breaks dd_rescue takes when it encounters errors. I have 6GB of the more critical partition written, but it’ll probably take a couple of hours to have a big enough chunk that I can test R-Studio’s recovery of it.
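For anyone picturing the setup, here’s roughly the winning combination; the share name, username, and mount point are invented for illustration (the real image path shows up in the list below):

    # Mount a share from the Windows desktop over the LAN (smbfs was the
    # usual filesystem type on kernels of this vintage; newer setups use cifs).
    mount -t smbfs -o username=myuser //desktop/recovery /mnt/smb

    # Read the dying disk from the local Firewire enclosure and write the
    # image straight through that mount to the desktop's Firewire drive.
    dd_rescue -v /dev/sdb5 /mnt/smb/image/sdb5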

The only reason I’m even writing about this is because I find it hilarious how many layers of abstraction I am breaking through to do a relatively low-level operation. Think about it:

  1. My broken IDE drive is converted to Firewire by a Firewire-IDE bridge.
  2. My Firewire PCMCIA adapter is allowing my notebook to take in that connection.
  3. The Linux kernel is allowing firewire to be accessed via various ieee1394 ohci drivers.
  4. The Linux kernel is abstracting the firewire disk as a SCSI disk, using emulation.
  5. The SCSI disk is being read by dd_rescue and written to a file, which exists in the path /mnt/smb/image/sdb5.
  6. That path seems local, but is actually a mount point. That mount point seems physical but is actually handled by a Samba driver.
  7. The writes by dd_rescue to that image file are being sent through the kernel’s TCP/IP stack, and flying through my switch, and being accepted by Windows XP’s network stack.
  8. Windows XP is writing that data to an NTFS drive, which is itself connected by a Firewire-IDE bridge (and therefore all the above steps’ equivalents for Windows apply).

I am surprised that, with that many layers, this copy is even working. I really should have just taken a machine apart and connected these drives directly by IDE, to save myself a few layers.

Cindy Sheehan smeared by O’Reilly

I really would expect nothing less of my unfortunate neighbor, Bill O’Reilly. Apparently on last night’s episode of his wonderful show, the O’Liar Factor, he smeared Cindy Sheehan, the grieving mother who’s been galvanizing the Left as of late. Apparently we still live under McCarthyism, where it’s not who you are, but with whom you associate, that determines whether you are a “radical” or “commie bastard.”

How do people still watch his show?

On the security of an e-mail address

I was just looking at my strange contact page, where I list my e-mail address using a sort of obfuscated string with _ and * characters mixed in. And then I saw someone’s e-mail address listed on the web with the following format:

user () domain ! com.

At that point, I started to think about all the other variations of this spam-protection trend I’ve seen, like user ///at\\\ domain ///dot\\\ com, and I realized that many of us are taking the wrong approach, myself included. For example, the one above could easily be decoded by knowing the common TLDs and working backwards from there. If I find a “com”, “org” or “net”, I can look at the string tokens that occur before it: any run of valid characters (say, alphanumerics) followed by whitespace or invalid characters (like parentheses and exclamation points) can be taken as a valid part of the address. From there, it’s easy to split user () domain ! com into its proper parts and construct the e-mail. The same approach works for, say, user ///at\\\ domain ///dot\\\ com.
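Here’s a crude sketch of that heuristic, just to show how little work it takes. This is my own illustration, not code from any actual harvester:

    # Reduce the line to bare alphanumeric tokens, throw away filler words
    # like "at" and "dot", and if the last token is a known TLD, glue an
    # address back together from what's left.
    echo 'user ///at\\\ domain ///dot\\\ com' |
      tr -cs '[:alnum:]' ' ' |
      awk '{
        n = 0
        for (i = 1; i <= NF; i++)
          if ($i != "at" && $i != "dot") parts[++n] = $i
        if (n >= 3 && parts[n] ~ /^(com|net|org)$/)
          printf "%s@%s.%s\n", parts[1], parts[n-1], parts[n]
      }'
    # Prints: user@domain.com -- and "user () domain ! com" falls to the
    # same pipeline just as easily.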

So what I realized is that perhaps it would be better to insert other e-mail addresses in there, addresses that might get picked up as part of the real address even by a heuristic scan. For example,

user __at__ domain :: NOT [email protected] :: __dot__ com

That seems more secure to me 😉 Another approach is just to prevent the TLD from being a complete token. This is the approach I took. Turn com into c_o__m or something, and you’re less likely to get picked up in a scan that is searching for “com”.