Archive for October 2009
I recently came across the Ubiquity plugin from Mozilla Labs. This project is championed by Aza Raskin, son of the late Jef Raskin of Mac UI fame. The Raskin legacy is a powerful one, and I’ve found personal inspiration on the pages of his site (Update: the articles on Jef’s personal site seem to have been removed).
The ideas embodied by Ubiquity are powerful. However, watching the demo video on the Mozilla Labs site, I realized that I don’t really care about embedding maps in my emails. Pasting a single hyperlink is like dropping a bomb: by virtue of a pass-by-reference call mechanism that is understood on nearly every modern computing platform, you can convey a world of information with a single line of text. If I want to send a map, I can send a link to Google Maps. The most significant advances in usability seem to be ways to convey information with less explicit effort rather than more.
Nowhere is this idea more evangelized than among the Linked Data supporters. Linked Data is a sub-topic of the Semantic Web movement. The idea with Linked Data is that a simple hyperlink is enough meta-information to create powerful data graphs that make small bits of data more useful than they could ever be as standalone information. We are generating more and more data on the web, and much of it is realtime and segmented in nature. These messages have little meaning on their own without a world of context to set the stage for them. Fortunately we have a solution for building context on the web: the hyperlink.
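The "hyperlink as context" idea can be sketched in a few lines. The toy snippet below is plain Python, not real RDF tooling, and every URI in it is invented for illustration; it just shows how a bare identifier picks up meaning from the graph of links around it:

```python
# Toy triple store: (subject, predicate, object) -- the Linked Data shape.
# All URIs below are invented for illustration.
triples = [
    ("http://example.org/people/alice", "worksFor", "http://example.org/org/acme"),
    ("http://example.org/org/acme", "basedIn", "http://example.org/places/boston"),
]

def context_of(subject):
    """Follow a subject's outgoing links -- the 'world of context'."""
    return [(pred, obj) for (subj, pred, obj) in triples if subj == subject]

# A bare identifier gains meaning only through the links around it:
print(context_of("http://example.org/people/alice"))
```

Each hop through the graph adds context that no standalone record could carry on its own.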
I ran across an article from Gabor Cselle, CEO of reMail and former VP of Engineering at Xobni that touched on some thoughts on email that I have blogged about previously. Here is the relevant text from the article:
The lack of innovation in email is because the underlying protocols suck. If you have a great idea about how to use or display the data in Twitter, all you need to read is the Twitter API docs. If you have a great idea in email, you need to know MIME (the encoder), SMTP (the message protocol), IMAP or Exchange (the access layer), and your email client (the viewer). The email technology stack is huge, wobbly, and antiquated.
Take IMAP: a hugely inefficient, stateful protocol with an ugly message format. State-of-the art in the late 1990s, yes, but if you were to reinvent it today, you could do a much better job.
We need to make it easier to innovate around the mail client. We could rip out everything (maybe save for SMTP) and build a great new stack that allows fast iteration. Make it easier to move the needle in email, and the needle will move.
My take on saving email was to envision things as a web messaging protocol that solves the same problems that email was designed to solve, namely one-to-one and one-to-few communications. The difference is that email would become a Web API that is easy to work with and integrate into other platforms. The problem that email solves is no less relevant today than it was years ago when it was first developed. Twitter, Facebook and Google Wave are not really targeting the same problem. Twitter shows us that when we open up simple APIs, the community will move the needle.
These days everyone has their own take on what ‘hacking’ is all about. Paul Graham’s seminal essay ‘Hackers and Painters’ tries to pin it down, as does Eric Raymond’s ‘How to Become a Hacker’. I have even taken a crack at figuring out what it means to me. More recently, Paul Buchheit of Gmail fame posted some thoughts on his blog that I found interesting. The post starts out echoing some ideas on computer and reality hacking that aren’t completely unique, but I thought the following passage was especially interesting:
Important new businesses are usually some kind of hack. The established businesses think they understand the system and have setup rules to guard their profits and prevent real competition. New businesses must find a gap in the rules — something that the established powers either don’t see, or don’t perceive as important. That was certainly the case with Google: the existing search engines (which thought of themselves as portals) believed that search quality wasn’t very important (regular people can’t tell the difference), and that search wasn’t very valuable anyway, since it sends people away from your site. Google’s success came in large part from recognizing that others were wrong on both points.
Thinking about business opportunities in the context of reality hacks offers new insight into why disruptive innovation can be so effective at shaking up entrenched markets.
UPDATE: I’ve added a post that addresses the cause of my issues here.
I was adding a new file to one of my git repositories and was confronted with an error on trying to commit the file:
C:\>git commit -m "adding README"
warning: LF will be replaced by CRLF in README
* You have some suspicious patch lines:
* In README
* unresolved merge conflict (line 17)
* unresolved merge conflict (line 19)
* unresolved merge conflict (line 34)
Reading up a little bit on the way git handles line endings, I turned up this gem in the man page of git-config:
- core.autocrlf: If true, makes git convert CRLF at the end of lines in text files to LF when reading from the filesystem, and convert in reverse when writing to the filesystem. The variable can be set to input, in which case the conversion happens only while reading from the filesystem but files are written out with LF at the end of lines. Currently, which paths to consider “text” (i.e. be subjected to the autocrlf mechanism) is decided purely based on the contents.
- core.safecrlf: If true, makes git check if converting CRLF as controlled by core.autocrlf is reversible. Git will verify if a command modifies a file in the work tree either directly or indirectly. For example, committing a file followed by checking out the same file should yield the original file in the work tree. If this is not the case for the current setting of core.autocrlf, git will reject the file. The variable can be set to “warn”, in which case git will only warn about an irreversible conversion but continue the operation.
CRLF conversion bears a slight chance of corrupting data. autocrlf=true will convert CRLF to LF during commit and LF to CRLF during checkout. A file that contains a mixture of LF and CRLF before the commit cannot be recreated by git. For text files this is the right thing to do: it corrects line endings such that we have only LF line endings in the repository. But for binary files that are accidentally classified as text the conversion can corrupt data.
If you recognize such corruption early you can easily fix it by setting the conversion type explicitly in .gitattributes. Right after committing you still have the original file in your work tree and this file is not yet corrupted. You can explicitly tell git that this file is binary and git will handle the file appropriately.
Unfortunately, the desired effect of cleaning up text files with mixed line endings and the undesired effect of corrupting binary files cannot be distinguished. In both cases CRLFs are removed in an irreversible way. For text files this is the right thing to do because CRLFs are line endings, while for binary files converting CRLFs corrupts data.
Note, this safety check does not mean that a checkout will generate a file identical to the original file for a different setting of core.autocrlf, but only for the current one. For example, a text file with LF would be accepted with core.autocrlf=input and could later be checked out with core.autocrlf=true, in which case the resulting file would contain CRLF, although the original file contained LF. However, in both work trees the line endings would be consistent, that is either all LF or all CRLF, but never mixed. A file with mixed line endings would be reported by the core.safecrlf mechanism.
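The irreversibility the man page warns about is easy to see in a few lines. The snippet below models the two conversions in plain Python; it is a sketch of the behavior, not git’s actual implementation:

```python
def commit_filter(data: bytes) -> bytes:
    # core.autocrlf=true: CRLF -> LF when a file is committed
    return data.replace(b"\r\n", b"\n")

def checkout_filter(data: bytes) -> bytes:
    # core.autocrlf=true: LF -> CRLF when the file is checked out
    return data.replace(b"\n", b"\r\n")

original = b"one\r\ntwo\nthree\r\n"  # mixed line endings
round_trip = checkout_filter(commit_filter(original))

# The lone LF after "two" has been promoted to CRLF and cannot be recovered.
print(round_trip == original)  # False
```

This is exactly the mixed-endings case that core.safecrlf flags: once the conversion has run, there is no way to tell which line endings were originally CRLF and which were LF.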
Since when has it been the domain of a version control system to actually change a file’s contents when it is checked in or out? I know that Linus is quite opinionated when it comes to the Right Way of handling line endings, but this is absurd. I don’t want my version control system to touch line endings at all. I’m a big boy, and I can figure out how to get my tools and editor to play nice with the appropriate line endings for my system. What I don’t need is a new source of confounding errors that typically take forever to track down. Debugging line endings is akin to trying to figure out why a multi-line shell script isn’t running properly, only to discover some extra whitespace after one of the line continuation characters. All hell would break loose if the version control system took it upon itself to adjust whitespace in your files!
I suppose that this is all a non-issue if you run things on Linux as Linus intends you to.
Full disclosure here: I own Palm stock, have a Palm Pre, and overall I think it is a great phone.
Palm as a company has performed nothing less than a feat of death-defiance in its comeback this year. In January, its stock was trading at under two dollars per share; at the time of this writing, it is over $17. Yes, it has taken on hundreds of millions of dollars in financing to get there, but this shows long-term commitment by investors and that this company is in it to win it now. Palm has executed on its vision of a next-generation hardware and software platform to replace its legacy PalmOS, albeit just in time.
Palm has done a lot of things right. They hired key Apple employees that were involved in product development there. Part of what makes Apple successful is a respect for design and aesthetics. People seem to either get this or not. Palm did what they had to do to hire the people that got it. Palm realized that the future of mobile platforms lies not in proprietary app development, but in web data convergence. The web is turning inside out. We used to ‘log on’ to the web, and now the web ‘logs on’ to our lives. It is always on, and nowhere is this more evidenced than in the mobile sphere. We now carry the internet with us every waking minute on our smartphones. This is a paradigm shift over laptops in that we don’t walk around checking our laptops. Even though we had the potential to open up our laptop at any moment, the barrier was high enough that it was not the same as being always on.
Palm gets this, and this is why the software platform is designed just like the web is. The internal services provided by the phone look just like web services. The apps and software are designed just like web sites. The email client and calendars assume that you have more than one account, and the idea is that everything integrates like a web mashup. This idea cannot be overstated. I think Google has realized this and now we see products in their pipeline such as Chrome OS. I would have expected to see Android devices such as netbooks and mobile internet devices if Google thought that they were on the right track with Android.
Palm understands product. Maybe not to the extent that Apple does, but they understand it better than Google. A phone isn’t just software. The strategic advantage is the software, but consumers don’t buy phones for the software. Palm understands this; Google does not. Google phones are made by HTC, which, although successful as an OEM, lacks brand recognition and has too many different variants to achieve any significant device recognizability. Carriers use phones as promotional items more than anything else. Sprint and Verizon are each touting some fad variant of last quarter’s hot phone as the next big thing, when in reality they are all largely forgettable. Even Blackberry is falling into this category. Do consumers know the difference between a Pearl and a Storm? I can’t tell the difference anymore between a Blackberry device and another OEM feature phone.
I have been frustrated by email more times than I can count, both as a user and as a developer. And I know I’m not the only one.
As a developer, I’m always running into character encoding standards and multipart MIME formatting issues. Anyone who has ever written any sort of service that processes email can attest that you never know what kind of malformed garbage you might get masquerading as valid email. Add strange and obsolete encoding standards for the actual message body (quoted-printable, anybody?) and you’ve got a certain recipe for disaster. And the format of the messages themselves is just the beginning. A simple text-based protocol made sense in the early days of the internet, but now there is too much black magic required to fully conform to all of the ways that email has been extended over the years.
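To give a flavor of the encoding gymnastics, here is Python’s standard email module unwrapping a quoted-printable body from a small multipart message (the sample message is my own, not a real one):

```python
from email import message_from_string

# A tiny multipart message with a quoted-printable text part.
raw = """\
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="XYZ"

--XYZ
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Caf=C3=A9 at noon, d=27accord?
--XYZ
Content-Type: text/html; charset=utf-8

<p>Caf&eacute; at noon?</p>
--XYZ--
"""

msg = message_from_string(raw)
for part in msg.walk():
    if part.get_content_type() == "text/plain":
        # decode=True undoes the quoted-printable transfer encoding
        plain = part.get_payload(decode=True).decode("utf-8")

print(plain)
```

Every =XX escape, boundary marker, and charset declaration is a chance for a malformed message to break your parser, which is the point: even this well-formed toy example needs three layers of decoding before you get readable text.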
As a user, I have tons of data locked up in email that is hard to search and manipulate. Each email provider has its own interface, and even though standards like POP and IMAP give you access from other clients, there are inconsistencies in which protocols are implemented by which providers and all too often, things like folders won’t be available.
New services like Google Wave are great, but I think that simple needs have taken a back seat to flashy realtime indulgences. Don’t get me wrong – I have been using Wave since early in the beta program, and I’m extremely excited and impressed by what they have accomplished, but what I really need is better email, not better instant messaging. I have plenty of ways of needling everyone with status, presence and other ephemeral communication. Twitter has been repurposed for lightweight commenting and messaging on blogs and user forums, but let’s take it just a little bit further and allow ourselves to decentralize it and use it like email.
Why can’t we have a simple, decentralized way of sending messages on the net? I’m envisioning something that has a simple and universal REST interface for retrieving and storing messages. Let’s admit that HTTP is far and away the protocol of choice on the net and embrace it fully for messaging.
Here is the deal: anyone can use whatever fancy-pants client they want with this email standard, just the same way that anyone can use the web browser of their choice for browsing the web. At the same time, there is a standard web-friendly way of indexing, sending, and receiving messages. The key phrase here is ‘web friendly’. I don’t want to rely on the email server for searches and folders and whatever kind of additional functionality I may desire. We can layer this functionality on top of the basic service if we want to. Using HTTP opens the door for authentication mechanisms such as OpenID and would even allow us to delegate permissions and tasks using OAuth. I could see Atom becoming the universal messaging format for the web. Google is already using it in all of their GData APIs (YouTube, Calendar API, Contacts API, and so on) and even Microsoft, who tried to publish their own competing standard at first, ended up going with it.
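To sketch what a message-as-Atom-entry might look like in practice (the element layout and the endpoint URL in the comment are my own invention, not any existing spec):

```python
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def atom_message(sender, subject, body):
    """Build a message as a minimal Atom entry (hypothetical layout)."""
    entry = ET.Element(f"{{{ATOM}}}entry")
    author = ET.SubElement(entry, f"{{{ATOM}}}author")
    ET.SubElement(author, f"{{{ATOM}}}email").text = sender
    ET.SubElement(entry, f"{{{ATOM}}}title").text = subject
    ET.SubElement(entry, f"{{{ATOM}}}content").text = body
    return ET.tostring(entry, encoding="unicode")

xml = atom_message("alice@example.com", "Lunch?", "Noon at the usual place.")
# A client would then POST this to something like
# https://mail.example.com/messages -- URL invented for illustration.
print(xml)
```

Compare this to MIME: the whole message is one well-formed XML document that any feed reader, search indexer, or mashup already knows how to parse.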