Dan Newcome on technology

I'm bringing cyber back

Archive for January 2010

Failure is not the goal

with 3 comments

“Fail often” is a recurring mantra among startup thought leaders, but there is a disconnect between what the phrase literally says and what it is intended to convey in a startup context, and that disconnect has been grating on me.

I understand the intent of statements like this, but I think this oversimplification of the creative process is quietly starting to hurt us. Mindless failure is not productive, and when the mantra itself is to fail, it becomes okay to give up too easily. What we should be saying is that we should try more things that could fail, and be unafraid of failing. That does imply that we will fail more often, but it is better connected to what we are actually trying to achieve. When I sit down to play guitar, I don’t say “I want to play more bad notes”, but I know that I won’t learn more challenging material without playing plenty of them along the way.

Seth Godin said “good ideas come from bad ideas”. Although I reject the notion that an idea is intrinsically either good or bad, the underlying theme is that the ideas that work are necessarily a subset of all the ideas on the table. It turns out that the most effective, and in many cases the only, way of finding the ones that work is to try them all.

I would say that it might be wise to try out only the ideas that have some hope of succeeding, but it seems overly optimistic to think that anyone can do that without inadvertently killing some good ideas in the process.

The goal is not to fail–the goal is to learn.

Written by newcome

January 28, 2010 at 10:59 pm

Posted in Uncategorized

Thoughts on Apple’s new tablet: the iPad

with one comment

I’ve been writing about tablet devices on this blog, so I wanted to put my initial thoughts out there on today’s announcement of the iPad. I wrote earlier that in order to succeed where others had failed, a new tablet device would have to re-invent the UI completely. What I failed to notice was that Apple had already done that: with the iPhone! All of the research and development that Apple poured into the iPhone interface applies roughly to touch interfaces of any size. The first innovation was subtle: dropping the pen. Early in the mobile device market, the pen seemed like the only logical way of interacting with a touch-sensitive device. Fingers obstructed the view and were low resolution (especially for those of us with bigger hands). Of course, Apple solved this by changing the UI instead of trying to solve the problem with the input device.

I fully expected to see no pen interface and no native handwriting recognition, and so far this seems to be the case. One thing that I didn’t know was whether the iPad was supposed to be a full replacement for a laptop. The answer, according to what I’ve seen of the keynote so far, is no, it is not. A key takeaway is that Steve, in his introduction of the device, asked the question “what comes between the iPhone and a laptop?” Whatever answers that question would have to do some things better than either device to justify its existence. Confusing the issue slightly for me was the announcement of the iWork office suite for the iPad. Given the focus on the tablet not being a laptop replacement, I’m surprised that this made the cut. However, the logic might be that in order to bridge the gap effectively, there had to be at least some level of compatibility with native document formats. Interestingly, Apple’s story with the iPad is much different from Google’s story with ChromeOS: on the iPad apps still rule the day, rather than the device embracing the web fully the way ChromeOS does. I predict that this will change eventually, but Apple is obviously doubling down on the App Store in the meantime.

Overall, the iPad is more similar to an oversized iPhone than anything else. However, it has a less focused personality than the iPhone. Remember that in the iPhone keynote Steve hammered home the point that the iPhone was three things: an iPod, a phone, and an Internet mobile communicator. By Jobs’ own description, the iPad is an eBook reader, a movie player, an email device, a gaming device, and more. The iPhone can do these things as well, but Apple is very good at focusing consumer attention so that people understand the device better. I think that if there is one failing of the iPad pitch, it is that I’m still not convinced the iPad does any one of these things so much better than what I already have that I need to run out and buy one to do it.

Written by newcome

January 27, 2010 at 2:32 pm

Posted in Uncategorized

Source control in the cloud

leave a comment »

I’ve been experimenting with using GitHub for a few small projects that I have in mind, and I just had a minor revelation about having source control as a web service. I checked in a file and realized that I didn’t finish commenting something important in the header. Ordinarily I would have gone back to my development environment and made the change and checked it in. However, while viewing the file on GitHub I noticed that I could edit the file online in the browser, with nice source highlighting and all.

Given that I’ve set up countless source control servers over the years, this kind of thing might seem minor, but it cemented a few things for me about the impact that better source control tooling could have in the future. Although I typically install some kind of web front end for viewing the source online, I’ve found many of these tools difficult to set up and even more difficult to use for anything other than basic source viewing.

I’ve been a Subversion user and proponent for several years now, having moved out of a dark period with SourceSafe some years ago. I was able to convince my employer at the time to migrate our codebase out of SourceSafe and into Subversion (using the vss2svn conversion script).

For all of the hype that projects such as Mozilla’s Bespin have generated about running your IDE in the cloud, I can see that the real importance of putting tools in the cloud is the ability to do things like managing builds and running tests: all of the things that you already do on a server in your own environment anyway.

Beyond convenience, using services like GitHub could provide a way for the members of your team to collaborate on your codebase informally and experimentally. Forking the codebase is easy, and there is better visibility into what a fork is trying to accomplish. There have been many times when I saw a branch in one of my SVN repositories and didn’t know right away what the branch was for.

Written by newcome

January 25, 2010 at 1:50 am

Posted in Uncategorized

Stereo imaging for tutorial images

leave a comment »

I was watching a Discovery Channel interview with James Cameron about the filming of Avatar. One thing that struck me was his comparison of shooting in 3D to shooting in color. When color film was first introduced in cinema, studios didn’t stop shooting black and white overnight; there was a period of overlap. However, there was a watershed moment for color films: the introduction of color television. Once color TV was popular, it was clear that the studios had to produce color films if they wanted to show them on television. This single development ushered in the era of color films, and now we only see black and white as the occasional special effect. Cameron posits in the interview that 3D is likely to follow a similar trajectory. If some development in home theater or television enables casual viewing of 3D video, shooting in 3D will become the norm for the movie industry.

Extrapolating these ideas a bit, it doesn’t seem unreasonable that we will be taking stereoscopic still images in the very near future. Searching around, I’ve seen a few hacks that pair two digital cameras to create a stereoscopic still camera. I just wonder what the watershed moment will be for stereo images. Special techniques are currently needed to view them, so their usefulness is limited. However, if we end up using some sort of stereo eyetap heads-up display for viewing digital media in the future, it could come to pass that we expect images to be shot in 3D.

What does this mean for tutorial images? I think that this could be really great for DIY and hardware hacker articles. I’ve been planning to write a post on taking good pictures of things like electronic devices, since I’ve found it to be a lot harder than I expected, but that will have to wait for another time. In the meantime I’ve been thinking a lot about how to get better images for my teardown and modification articles. The biggest issues that I’m having revolve around lighting, focus, and depth of field. However, choosing the angle of the shot is also very hard in certain cases, such as when you are trying to show the position of an assembly relative to its neighbors. In cases like this, 3D images would be a fantastic tool in my photo arsenal.

Written by newcome

January 24, 2010 at 4:40 pm

Posted in Uncategorized

Is the cognitive surplus real?

leave a comment »

I re-read some of Clay Shirky’s writing about the idea of cognitive surplus recently. While the ideas are powerful and well researched, I still have some misgivings about the value of the supposed cognitive surplus that was soaked up by television over the years and is now increasingly being channeled online.

While television is widely derided as an intellectually vapid activity and the Internet is somehow clear of such a stigma (for now), my experiences online point to the idea that online pursuits vary widely in their general worth to society. Take Wikipedia as one extreme end of the spectrum and something like Perez Hilton as the other. At its worst, the Internet can offer just the same cheap thrills and mindless entertainment that television did.

Now that we’ve established the variance in online activities, what do you think the distribution is going to look like among television defectors? I’m willing to bet that it isn’t going to be skewed toward high-value activities. The cognitive surplus is only going to be real if people are motivated to share their productive gifts with society via the Internet rather than use it as a passive sink just like television.

Written by newcome

January 17, 2010 at 4:07 pm

Posted in Uncategorized

Recording webcam videos with VLC Media Player

with 121 comments

I have been recording short videos with the webcam on my laptop using a trial version of some video software that I found on the net. I had also been using the free Yawcam to snap stills, but I didn’t figure out how to get it to record video. It apparently can periodically save still frames or stream over HTTP, but what I wanted in the end was an .mpg file. I searched around the net for an open source program that would record video from my webcam, but I came up empty. Cheese seems like a good option under Linux, but my laptop is running Windows right now, so that doesn’t help me. If anyone knows of something, let me know in the comments. It’s probable that one of the open source nonlinear editing programs can do this, but I haven’t figured out how.

I’ve used VLC media player to play videos on Windows and Linux for a long time, and in my search for webcam software found that it can supposedly record video from a live source, so I decided to give it a try. The tutorials that I found were mostly outdated, so it turned out to be pretty frustrating to get working, which is the primary motivation for writing this post. Hopefully others will be able to get this working on the current version of VLC (1.0.3 at the time of this writing) more easily than I was able to.

Just a warning: I haven’t gotten this to fully work the way I wanted using the GUI yet, so the final solution presented here will be a command line invocation of VLC. It turns out that this is more convenient anyway, since the GUI involves a lot of tedious steps that are completely automated when using the command line.

Foreword on VLC

Unlike many video programs on the Windows platform, VLC does not use any external codecs or filters. It is completely self-contained. This was a major source of confusion for me initially, as I was looking around endlessly for the Xvid codec that I wanted to use, only to find that it was never detected by VLC.

Even though VLC is self-contained, its functional elements are arranged into what the VLC authors call modules. This is important to understand when trying to chain together the functions that we want on the command line. The most helpful synopsis for me was found here, and I’ll put the general form inline for reference:

% vlc input_stream --sout "#module1{option1=parameter1{parameter-option1},option2=parameter2}:module2{option1=...,option2=...}:..."

The commandline shown above is for Linux systems, but the important thing to notice is that the first module is referenced using #module and subsequent  modules are referenced using :module. Also, options to modules are enclosed in curly braces {…} and may be nested. Nesting will be important when we try to split the stream so that we can both record it to disk and monitor it on the screen during recording.

I noticed some inconsistency in the documentation concerning the argument formats that are supported on various platforms. For example, the --option param syntax is not supposed to work on Windows, but it appears to in most cases. We will adhere to the Windows --option=param form here, however.

VLC is also very flexible, and consequently it is complicated to set up all of the options required to create a seemingly simple mpeg stream. I never knew about the different mpeg container formats for network broadcast vs. local media (TS vs. PS) before this, and it’s debatable how useful that knowledge is unless you are into video pretty heavily. You won’t need any of it to follow what we are going to do here, but it was an issue when I was trying to figure this out, so if you go off the beaten path there may be more to figure out than you think.
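For what it’s worth, the container is selected by the mux option of the standard module that we will use below, so as far as I can tell switching from a program stream to a transport stream is just a matter of changing that one option (the .ts file name here is only an illustration):


standard{access=file,mux=ts,dst="C:\Users\dan\Desktop\Output.ts"}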

Some of the codecs are very strict about the options that they will take, and you won’t get detailed information about what went wrong unless you have enabled detailed logging, which is covered in the first part of this tutorial. One gotcha that hit me was that mpeg-2 only supports certain frame rates. The VLC codec adheres to these restrictions rigorously, and if an invalid frame rate is specified you will get a cryptic error about the encoder not being able to be opened. Similarly, if no frame rate is specified at all, VLC will not default to something that works, so you have to figure out what went wrong on your own.

Building the commandline

Invoking VLC is as simple as running vlc.exe. However we would like to turn on some extended logging while we are trying to get our options set up correctly. Otherwise issues such as the encoder failing to open will not be easily solved since we won’t know exactly what is going wrong.

The very first thing we should try is to make sure that we can open the webcam with extended logging enabled. The webcam device on my laptop is the default device, so we can open it using dshow:// as shown in the command below. We turn on logging using the --extraintf option, with the maximum level of verbosity specified using the -vvv flag. A small warning: mute the microphone on your computer before running the following, since you might get a feedback loop that is pretty loud. We will fix this later by using the noaudio option to the display module.


c:> vlc.exe dshow:// --extraintf=logger -vvv

If all goes well you should see a VLC window showing the output of your webcam. The only thing left now is to transcode the video stream into mpeg-2 and save it to a file (all while showing a preview window), which turns out to require some VLC module gymnastics.

Transcoding

The main task that we are trying to accomplish is transcoding the stream, which is the term for converting it from one encoding to another so that it can be saved as mpeg. The output of the webcam is in an uncompressed format, so we need to run it through a codec before we can save it to disk. The following command uses two different modules: transcode and standard. Transcode lets us create an mpeg stream, and standard lets us package it into a container and save it to disk. This seems pretty straightforward, but there are some voodoo options here that I saw in the examples online but didn’t find very good explanations for. Setting audio-sync, for example. Do we ever want un-synced audio? The important part that seems to be left out of many examples is setting the frame rate and the size. Failing to set the frame rate using the fps option caused the encoder to fail for me. Failing to set the width caused problems later when I tried to preview the video stream during recording.


c:> vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=1024,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:standard{access=file,mux=ps,dst="C:\Users\dan\Desktop\Output.mpg"} --extraintf=logger -vvv

Monitoring the stream

Using what we have so far will get us a stream on disk, but we can’t see what we are doing on the screen. Fortunately VLC has a module called display that will let us pipe the output to the screen. Unfortunately we can’t do that without also using the duplicate module to split the stream first. Using duplicate isn’t too complicated, but it took me a little while to find out how to use the nesting syntax that is needed to get it to work. The general form of the duplicate module is:


duplicate{dst=destination1,dst=destination2}

Where destination1 and destination2 are the module sections that we want to send the stream to.  The only confusing part is that we have to move our standard module declaration inside of the duplicate module definition like this:


duplicate{dst=standard{...}}

Once we have this form, we can add other destinations like this:


duplicate{dst=standard{...},dst=display{noaudio}}

We have added a second destination to show the stream on the screen. We have given the option noaudio in order to prevent a feedback loop since by default display will monitor the audio.

My final command looked like this:


c:> vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=1024,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=ps,dst="C:\Users\dan\Desktop\Output.mpg"},dst=display{noaudio}} --extraintf=logger -vvv

I put the command into a batch file, and now I can create an .mpg file by running the batch file. Some possible improvements could be to parameterize the file name and perhaps allow for setting the bitrate, but for now this suits my needs perfectly.
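For reference, here is a rough sketch of what that batch file might look like with the output file and the video bitrate passed as arguments. The record.bat name and the default of 1024 kbps are just my own choices, and I haven’t tested this parameterized version:


@echo off
rem record.bat: record the webcam to the given file using VLC
rem usage: record.bat <output file> [video bitrate in kbps]
set OUTFILE=%~1
set VBITRATE=%2
if "%VBITRATE%"=="" set VBITRATE=1024
vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=%VBITRATE%,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=ps,dst="%OUTFILE%"},dst=display{noaudio}} --extraintf=logger -vvv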

Written by newcome

January 17, 2010 at 12:05 pm

Posted in Uncategorized

Cycle time of an online community

leave a comment »

I have taken part in many online communities over the years, and I have noticed that no matter what, there are certain cycles that tend to happen. There have been articles written about how sites such as Reddit or Digg change over time, but the thing I’m most interested in is the steady-state ‘cycle time’ of a community.

I would loosely define the cycle time as the length of time it takes a new member of a community to be fully exposed to the range of content and activity that will likely ever occur in the community. Inevitably at some point the new user will start to see mostly repeat topics.

I can go back to some of the music production forums that I was on nearly ten years ago and find people asking the same questions. When I was active in the community, some things cycled very quickly and other things took a really long time to fully cycle. It was not even apparent to me at the time that I had fully cycled through the community experience. Looking back, it seems much more obvious.

I’m not sure how to expand on this idea yet, so I will leave this post as-is. Hopefully I’ll revisit this one though.

Written by newcome

January 15, 2010 at 6:35 pm

Posted in Uncategorized

Zero an old hard disk using dd

with 2 comments

Any time I get rid of a hard disk, I always overwrite the whole drive with zeroes. I know that this is not a secure practice if you are going to be selling the drive, but since the drive is going to the computer recycling center and the data isn’t a matter of national security, a quick wipe should be sufficient. If you want to resell the drive, I’d recommend something like DBAN, which will overwrite your data properly so that it cannot be retrieved. Practically speaking though, zeroing a drive is enough to keep most people from retrieving the data. A drive that is on the heap with hundreds or thousands of other drives isn’t likely to be scrubbed for data anyway. I could be wrong on this, and anyone in the drive recycling business can chime in and enlighten me, but most of it probably gets shredded for scrap, right?

I use a cheap USB IDE/SATA hard drive converter to plug the old drive into my computer and then boot the computer with the Knoppix GNU/Linux distribution. Once I’m logged in, I use the following command to overwrite the whole drive with zeroes:


dd if=/dev/zero of=/dev/<drive> bs=1M

Replace <drive> with the device that represents the disk to be zeroed. Using the `dmesg’ command is helpful in determining the device name of a removable USB drive.
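For example, checking the last few kernel messages right after plugging the drive in will usually show the device name it was assigned (typically something like /dev/sdb on a machine whose internal disk is /dev/sda):


$ dmesg | tail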

To check the progress we can open up another terminal window and do this:

$ while true; do kill -s USR1 <pid>; sleep 5; done

Replace <pid> with the process ID of the dd process that is running in the other terminal window. This will cause the running `dd’ command to report its progress every 5 seconds to the terminal that it is running in.
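If you don’t know the process ID offhand, something like the following should find it (assuming no other process is named exactly dd):


$ pgrep -x dd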

This technique could be extended to use /dev/urandom to write random data to the drive also, but generating random data slows things down significantly on my machine and I don’t want too many more excuses standing in the way of getting rid of stuff that is taking up space in my office!
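For what it’s worth, the random-data variant is just a matter of changing the input file, with the same caveat about replacing <drive>:


dd if=/dev/urandom of=/dev/<drive> bs=1M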

Written by newcome

January 14, 2010 at 8:51 pm

Posted in Uncategorized

DIY wins the day

leave a comment »

In the mid-90s, I was a college student, newly relegated to a dormitory room after having had ample room for my music endeavors at my parents’ home several hours across the state of Pennsylvania.

What I lacked in space, I made up for in newly-acquired access to the wonderful World Wide Web of information. You see, having a dorm room ethernet connection was my first link to the world outside of the single-user bulletin board systems that I used to dial into in high school. Those boards had something called `email’ that was sent over the `Internet’, but the power of such things was masked to me because they were hidden behind the disconnected nature of the dial-up bulletin board.

How does this relate to making music? In high school, I was very involved in recording bands using the best gear that I could afford. This included an enormous Tascam 38 1/2-inch tape machine and its associated mixing desk, along with DBX noise reduction units, snakes, and the requisite wiring harnesses to make the whole thing work. I learned by trial and error on this cumbersome rig using time-consuming tape handling and splicing techniques. During this time, I became aware of many independent music labels and bands embracing the DIY or `do it yourself’ ethos of recording music. DIY was something I could certainly relate to, because that is exactly what I was doing! However, I was missing an essential idea that was espoused in the burgeoning DIY scene: I was trying too hard to be `good’, to be perfect.

The idea of not obsessing over the technical details of the recording was so endemic in the DIY scene that it had its own term: lo-fi. The way that the term was bandied about didn’t sit well with me. To me, the whole point of making records was to make them sound like `real’ recordings, like something that you would hear on the radio, something that would separate you from the amateur recording engineer.

Being separated from my beloved recording rig, I sought a new outlet for my recording urges. Luckily, I had gravitated to recording-oriented Usenet groups and found what I thought was a good temporary solution: the lowly 4-track cassette recorder. I would buy a cheap unit, and when I got home for the summer I would do `real’ recordings on my `real’ equipment.

Little did I know that buying a 4-track would change the way I thought about recording music forever. I became fearless. Having constant access to a recording device was intoxicating. I quickly filled up tape after tape with reckless abandon; cassettes were so much cheaper than the 1/2-inch reels I was used to that recording almost felt free. There was practically no setup time. I could record anything I wanted, anytime. If I had a riff in my head I could record it in seconds rather than rolling out tons of equipment and spending hours setting it up just to get to the first step of actually getting a sound on tape.

I was now recording anything and everything. I began building a corpus of sound bites that I would go back to for more inspiration. This iterative process of coming up with ideas did not exist for me when the barrier just to commit something to tape was so high. As profound as this change was, it didn’t hit me fully until later, when I got back home to my big recording setup.

As personally liberating as the 4-track was, I didn’t see the same reactions in other people. Not yet. Lo-fi music remained an underground phenomenon, albeit an influential one, but self-recorded, self-released music was regarded as inferior by most people.

Technology has a funny way of accelerating things in ways we don’t fully understand until we are profoundly affected by the change. We are in control one minute, and hurtling toward an unknown destiny the next. Take the music industry for example: technology worked in its favor during the heyday of the Compact Disc, but was its undoing in the era of the mp3. Both were digital distribution formats, but the music industry miscalculated how long it could continue to milk the CD cash cow before turning its attention to digital downloads. Mp3s were good enough to be disruptive but not good enough for the incumbents to take notice until they were reeling from the impact.

Fast forward to today, where we live in a world of mass-market products and cheap goods. Cost of distribution is approaching zero for many things, and on-demand production is a reality. Authenticity is the new scarcity. Sites like Etsy thrive on people’s desire for handmade products.

Similar sentiments are emerging in the world of web technology. Standards are essential to enabling communication on the web, but complexity is the enemy. Standards that are too complicated are difficult to implement correctly, which limits their effectiveness. The idea of `worse is better’ is not so much that worse is the goal, any more than lo-fi was the goal of indie recording artists. It is simply a side effect of a mantra to reduce complexity and increase communication, whether what is being communicated is music or code.

Written by newcome

January 14, 2010 at 7:15 pm

Posted in Uncategorized

Do experts teach best?

with 2 comments

I’ve taken the title of this post from one of the sub-topics in this article about learning to learn. The article makes several good points and is certainly worth a read, but I want to focus on one question that was brought up near the end of the article: who should teach?

The article’s argument pointed to the fact that an expert on a subject may be blind to things that a student needs to know. The teacher may take some things for granted, so they aren’t communicated well. On the student’s side, there may not be enough insight to even articulate what is missing. Beginners need to see the process, not perfection.

When I was taking music lessons in school, I had a private teacher who was also a teacher at the school that I attended. Fortunately for me, he lived close by and my parents were able to afford his private lesson rates. I made significant progress under his tutelage, but it wasn’t until much later that I learned there was more potential for me to progress in my studies than I thought.

Fast forward several years, and I found myself in a position to informally teach a student who was a friend’s younger sibling. The family was happy to compensate me for my time, which I gladly accepted. The compensation was much lower than what an accredited music teacher would have earned, but since I lacked any credentials, the arrangement was certainly appropriate.

Upon hearing about my student’s informal arrangement, her teacher (who was also my former teacher) expressed concern that she should not be taking lessons from anyone who was not a certified music instructor. Ordinarily that, as they say, might have been that. However, the teacher noticed that the student had been progressing faster than her peers, and the student cited being more comfortable asking questions in the less formal lessons that I had been giving her.

She continued her lessons with me for about a year before I turned her over to another peer of mine, who was more accomplished than I was. I considered her to be in good hands, and I felt good about the progress that we had made during our lessons.

It wasn’t until several years later that I learned that the formal school curriculum had changed to encourage informal lessons by older students. Apparently, the technique really worked, well enough to be formalized as part of the curriculum.

Written by newcome

January 14, 2010 at 6:49 pm

Posted in Uncategorized