Dan Newcome, blog

I'm bringing cyber back

Archive for January 2010

Failure is not the goal

with 3 comments

“Fail often” is a recurring mantra among startup thought leaders, but there is a disconnect between the literal meaning of the phrase and what it is intended to convey in the context of startups, and it has been grating on me.

I understand the intent of statements like this, but I think the oversimplification of the creative process is starting to hurt us without our noticing. Mindless failure is not productive, and when our mantra is to fail, it becomes acceptable to give up too easily. What we should be saying is that we should more often try things that could fail, that we should be unafraid to fail. That does imply that we will fail more often, but it is better connected to what we are actually trying to achieve. When I sit down to play guitar, I don’t say “I want to play more bad notes”, but I know that I won’t learn more challenging material without playing some bad notes in the process.

Seth Godin said “good ideas come from bad ideas”. Although I reject the notion that an idea is intrinsically good or bad, the underlying theme is that ideas that work are necessarily a subset of all ideas that are on the table. It turns out that the most effective way of finding the ones that work, and in many cases the only way, is to try them all.

I would say that it might be wise to try out only the ideas that have some hope of succeeding, but it seems overly optimistic to think that anyone can filter this way without inadvertently killing some good ideas in the process.

The goal is not to fail–the goal is to learn.


Written by newcome

January 28, 2010 at 10:59 pm

Posted in Uncategorized

Thoughts on Apple’s new tablet: the iPad

with one comment

I’ve been writing about tablet devices on this blog, so I wanted to put my initial thoughts out there on today’s announcement of the iPad. I wrote earlier that in order to succeed where others had failed, a new tablet device would have to re-invent the UI completely. What I failed to notice was that Apple had already done that: with the iPhone! All of the research and development that Apple poured into the iPhone interface applies roughly to touch interfaces of all sizes. The first innovation was subtle: dropping the pen. Early in the mobile device market, the pen seemed like the only logical way of interacting with a touch-sensitive device. Fingers obstructed the view and were low-resolution (especially if you have bigger hands). Of course, Apple solved this by changing the UI instead of trying to solve the problem with the input device.

I fully expected to see no pen interface and no native handwriting recognition, and so far this seems to be the case. One thing that I didn’t know was whether the iPad was supposed to be a full replacement for a laptop. The answer, according to what I’ve seen of the keynote so far, is no, it is not. A key takeaway is that Steve, in his introduction of the device, asked the question “what comes between the iPhone and a laptop?” Any answer to that question would have to do some things better than either device to justify its existence. Confusing the issue slightly for me was the announcement of the iWork office suite for the iPad. Given that the tablet is explicitly not a laptop replacement, I’m surprised that this made the cut. However, the logic might be that in order to bridge the gap effectively, there had to be at least some level of compatibility with native document formats. Interestingly, Apple’s story with the iPad is very different from Google’s story with ChromeOS. Apps still rule the day on the iPad, rather than embracing the web fully as ChromeOS does. I predict that this will change eventually, but Apple is obviously doubling down on the App Store in the meantime.

Overall, the iPad is more similar to an oversized iPhone than anything else. However, it has a less focused personality than the iPhone. Remember that in the iPhone keynote, Steve hammered home the point that the iPhone was three things: an iPod, a phone, and a mobile Internet communicator. By Jobs’ own description, the iPad is an eBook reader, a movie player, an email device, a gaming device, and more. The iPhone can do these things as well, but Apple is very good at focusing consumer attention so that people understand the device better. If there is one failing of the iPad pitch, it is that I’m still not convinced the iPad does any one of these things so much better than what I already have that I need to run out and buy one.

Written by newcome

January 27, 2010 at 2:32 pm

Posted in Uncategorized

Source control in the cloud

leave a comment »

I’ve been experimenting with using GitHub for a few small projects that I have in mind, and I just had a minor revelation about having source control as a web service. I checked in a file and realized that I didn’t finish commenting something important in the header. Ordinarily I would have gone back to my development environment and made the change and checked it in. However, while viewing the file on GitHub I noticed that I could edit the file online in the browser, with nice source highlighting and all.

Given that I’ve set up countless source control servers over the years, it seems as though this kind of thing would be minor, but it really cemented a few things for me about the impact that better source control tools might have in the future. Although I typically install some kind of web server for online viewing of the source, I’ve found many of these tools difficult to set up and even more difficult to use for anything beyond basic source viewing.

I’ve been a Subversion user and proponent for several years now, having moved out of a dark period with SourceSafe some years ago. I was able to convince my former employer at the time to migrate our codebase out of SourceSafe and into Subversion (using the vss2svn conversion script).

For all of the hype that projects such as Mozilla’s Bespin have been generating about running your IDE in the cloud, I can see that the real importance of putting tools in the cloud is the ability to do things like managing builds and running tests — all of the things that you do on a server in your own environment anyway.

Beyond convenience, using services like GitHub could provide a way for the members of your team to collaborate on your codebase informally and experimentally. Forking the codebase is easy, and there is better visibility into what a fork is trying to accomplish — there have been many times where I would see a branch in one of my SVN repositories and not know right away what the branch was for.
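
As a concrete sketch of what that workflow looks like (the remote name, URL, and branch names here are hypothetical), pulling down a teammate's fork in git is just a matter of adding it as a remote and fetching:

% git remote add dan git://github.com/dan/project.git
% git fetch dan
% git checkout -b dan-experiment dan/experiment

Compare that with stumbling on an unlabeled directory under branches/ in SVN and having to guess at its purpose.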

Written by newcome

January 25, 2010 at 1:50 am

Posted in Uncategorized

Stereo imaging for tutorial images

leave a comment »

I was watching a Discovery Channel interview with James Cameron about the filming of Avatar. One thing that struck me was his comparison of shooting in 3D to shooting in color. When color film was first introduced in cinema, studios didn’t stop shooting black and white overnight; there was a period of overlap. However, there was a watershed moment for color film: the introduction of color television. Once color TV was popular, it was clear that the studios had to produce color films if they wanted to show them on television. This single development ushered in the era of color films, and now we see black and white only as the occasional special effect. Cameron posits in the interview that 3D is likely to follow a similar trajectory: if some development in home theater or television enables casual viewing of 3D video, shooting in 3D will become the norm for the movie industry.

Extrapolating these ideas a bit, it doesn’t seem unreasonable that we will be taking stereoscopic still images in the very near future. Searching around, I’ve seen a few hacks that pair two digital cameras to create a stereoscopic still camera. I just wonder what the watershed moment will be for stereo images. Special techniques are currently needed to view them, so their usefulness is limited. However, if we end up using some sort of stereo eyetap heads-up display for viewing digital media in the future, it could come to pass that we expect images to be shot in 3D.

What does this mean for tutorial images? I think this could be really great for DIY and hardware hacker articles. I’ve been planning to write a post on taking good pictures of things like electronic devices, since I’ve found it to be a lot harder than I expected, but that will have to wait for another time. In the meantime I’ve been thinking a lot about how to get better images for my teardown and modification articles. The biggest issues I’m having revolve around lighting, focus, and depth of field. Beyond those, choosing the angle of the shot is very hard in certain cases, such as when you are trying to show the position of an assembly relative to its neighbors. In cases like this, 3D images would be a fantastic tool in my photo arsenal.

Written by newcome

January 24, 2010 at 4:40 pm

Posted in Uncategorized

Is the cognitive surplus real?

leave a comment »

I re-read some of Clay Shirky’s writing about the idea of cognitive surplus recently. While the ideas are powerful and well researched, I still have some misgivings about the value of the supposed cognitive surplus that was soaked up by television over the years and is now increasingly channeled online.

While television is widely derided as an intellectually vapid activity and the Internet is somehow clear of such a stigma (for now), my experience online suggests that online pursuits vary widely in their general worth to society. Take Wikipedia as one extreme end of the spectrum and something like Perez Hilton as the other. At its worst, the Internet offers just the same cheap thrills and mindless entertainment that television did.

Now that we’ve established the variance in online activities, what do you think the distribution is going to look like among television defectors? I’m willing to bet that it isn’t going to be skewed toward high-value activities. The cognitive surplus is only going to be real if people are motivated to share their productive gifts with society via the Internet rather than use it as a passive sink just like television.

Written by newcome

January 17, 2010 at 4:07 pm

Posted in Uncategorized

Recording webcam videos with VLC Media Player

with 121 comments

I have been recording short videos with the webcam on my laptop using a trial version of some video software that I found on the net. I had also been using the free Yawcam to snap stills, but I never figured out how to get it to record video. It apparently can periodically save still frames or stream over HTTP, but what I wanted in the end was an .mpg file. I searched the net for an open source program that would record video from my webcam, but I came up empty. Cheese seems like a good option under Linux, but my laptop is running Windows right now, so that doesn’t help me. If anyone knows of something, let me know in the comments. It’s probable that one of the open source nonlinear editing programs can do this, but I don’t know how.

I’ve used VLC media player to play videos on Windows and Linux for a long time, and in my search for webcam software found that it can supposedly record video from a live source, so I decided to give it a try. The tutorials that I found were mostly outdated, so it turned out to be pretty frustrating to get working, which is the primary motivation for writing this post. Hopefully others will be able to get this working on the current version of VLC (1.0.3 at the time of this writing) more easily than I was able to.

Just a warning, I haven’t gotten this to fully work the way that I wanted using the GUI yet, so the final solution presented here will be a command line invocation of VLC. It turns out that this is more convenient since there are a lot of tedious steps to go through that are completely automated when using the command line.

Foreword on VLC

Unlike many video programs on the Windows platform, VLC does not use any external codecs or filters; it is completely self-contained. This was a major source of confusion for me initially: I looked around endlessly for the Xvid codec that I wanted to use, only to find that VLC never detected it.

Even though VLC is self-contained, its functional elements are arranged into what the VLC authors call modules. This is important to understand when trying to chain together the functions that we want on the command line. The most helpful synopsis for me was found here, and I’ll reproduce the general form inline for reference:

% vlc input_stream --sout "#module1{option1=parameter1{parameter-option1},option2=parameter2}:module2{option1=...,option2=...}:..."

The commandline shown above is for Linux systems, but the important thing to notice is that the first module is referenced using #module and subsequent modules are referenced using :module. Also, options to modules are enclosed in curly braces {…} and may be nested. Nesting will be important when we split the stream so that we can both record it to disk and monitor it on the screen during recording.

I noticed some inconsistency in the documentation concerning the argument formats that are supported on various platforms. For example, the --option param syntax is not supposed to work on Windows, but it appears to work in most cases. We will adhere to the Windows --option=param form, however.

VLC is also very flexible, and consequently complicated, when it comes to setting up all of the options required to create a seemingly simple mpeg stream. I never knew about the different mpeg container formats for network broadcast vs. local media (TS vs. PS) before this, and that knowledge is of debatable use unless you are into video pretty heavily. You won’t need any of it to follow what we are doing here, but it was an issue while I was figuring this out, so if you go off the beaten path there may be more to figure out than you expect.

Some of the codecs are very strict about the options that they will take, and you won’t get detailed information about what went wrong unless you have enabled detailed logging. This is covered in the first part of this tutorial. One such gotcha that hit me was that mpeg-2 only supports certain frame rates. The VLC codec adheres to these restrictions rigorously, and if a valid frame rate is not specified you will get a cryptic error about the codec failing to open. VLC will not fall back to a frame rate that works, so without the logs you are left to figure out what went wrong on your own.

Building the commandline

Invoking VLC is as simple as running vlc.exe. However, we would like to turn on some extended logging while we are getting our options set up correctly; otherwise issues such as the encoder failing to open will be hard to diagnose, since we won’t know exactly what is going wrong.

The very first thing we should try is to make sure that we can open the webcam with extended logging enabled. The webcam device on my laptop is the default device, so we can open it using dshow:// as shown in the command below. We turn on logging using the --extraintf option, with the maximum level of verbosity specified using the -vvv flag. A small warning: mute the microphone on your computer before running this, since you might get a loud feedback loop. We will fix this later by passing the noaudio option to the display module.


c:> vlc.exe dshow:// --extraintf=logger -vvv

If all goes well you should see a VLC window showing the output of your webcam. The only thing left now is to transcode the video stream into mpeg-2 and save it to a file (all while showing a preview window), which turns out to require some VLC module gymnastics.

Transcoding

The main task that we are trying to accomplish is transcoding the stream: re-encoding the webcam’s raw output as mpeg so that it can be saved to a file. The output of the webcam is in an uncompressed format, so we need to run it through a codec before we can write it to disk. The following command uses two different modules: transcode and standard. Transcode lets us create an mpeg stream, and standard lets us package it into a container and save it to disk. This seems pretty straightforward, but there are some voodoo options here that I saw in the examples online but never found good explanations for. Setting audio-sync, for example: do we ever want un-synced audio? The important part that seems to be left out of many examples is setting the frame rate and the size. Failing to set the frame rate with the fps option caused the encoder to fail for me, and failing to set the width caused problems later when I tried to preview the video stream during recording.


c:> vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=1024,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:standard{access=file,mux=ps,dst="C:\Users\dan\Desktop\Output.mpg"} --extraintf=logger -vvv

Monitoring the stream

Using what we have so far will get us a stream on disk, but we can’t see what we are doing on the screen. Fortunately VLC has a module called display that will let us pipe the output to the screen. Unfortunately we can’t do that without also using the duplicate module to split the stream first. Using duplicate isn’t too complicated, but it took me a little while to find out how to use the nesting syntax that is needed to get it to work. The general form of the duplicate module is:


duplicate{dst=destination1,dst=destination2}

Where destination1 and destination2 are the module sections that we want to send the stream to. The only confusing part is that we have to move our standard module declaration inside of the duplicate module definition, like this:


duplicate{dst=standard{...}}

Once we have this form, we can add other destinations like this:


duplicate{dst=standard{...},dst=display{noaudio}}

We have added a second destination to show the stream on the screen. We have given the option noaudio in order to prevent a feedback loop since by default display will monitor the audio.

My final command looked like this:


c:> vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=1024,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=ps,dst="C:\Users\dan\Desktop\Output.mpg"},dst=display{noaudio}} --extraintf=logger -vvv

I put the command into a batch file, and now I can create an .mpg file just by running it. Some possible improvements would be to parameterize the file name and perhaps allow for setting the bitrate, but for now this suits my needs perfectly.
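
As a sketch of those improvements (the batch file name, default output path, and defaults here are illustrative, and I haven't tested this beyond the command shown above), the batch file might look something like this:

@echo off
rem record.bat -- usage: record [output-file] [video-bitrate]
rem Fall back to illustrative defaults when no arguments are given.
set OUTFILE=%1
if "%OUTFILE%"=="" set OUTFILE=%USERPROFILE%\Desktop\Output.mpg
set VB=%2
if "%VB%"=="" set VB=1024
rem Same module chain as above, with the file name and video bitrate substituted in.
vlc.exe dshow:// --sout=#transcode{vcodec=mp2v,vb=%VB%,fps=30,width=320,acodec=mp2a,ab=128,scale=1,channels=2,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=ps,dst="%OUTFILE%"},dst=display{noaudio}} --extraintf=logger -vvv

Running record clip1.mpg 2048 would then capture to clip1.mpg at a higher video bitrate.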

Written by newcome

January 17, 2010 at 12:05 pm

Posted in Uncategorized

Cycle time of an online community

leave a comment »

I have taken part in many online communities over the years, and I have noticed that no matter what, certain cycles tend to happen. Articles have been written about how sites such as Reddit or Digg change over time, but the thing I’m most interested in is the steady-state ‘cycle time’ of a community.

I would loosely define the cycle time as the length of time it takes a new member of a community to be fully exposed to the range of content and activity that will likely ever occur in the community. Inevitably at some point the new user will start to see mostly repeat topics.

I can go back to some of the music production forums that I was on nearly ten years ago and find people asking the same questions. When I was active in the community, though, some things cycled very quickly and others took a really long time to fully cycle. It was not even apparent to me at the time that I had fully cycled through the community experience. Looking back, it seems more obvious.

I’m not sure how to expand on this idea yet, so I will leave this post as-is. Hopefully I’ll revisit this one though.

Written by newcome

January 15, 2010 at 6:35 pm

Posted in Uncategorized