Dan Newcome on technology

I'm bringing cyber back

Archive for January 2010

Zero an old hard disk using dd

with 2 comments

Any time I get rid of a hard disk, I always overwrite the whole drive with zeroes. I know that this is not a secure practice if you are going to be selling the drive, but since the drive is going to the computer recycling center and the data isn’t a matter of national security, a quick wipe should be sufficient. If you want to resell the drive, I’d recommend something like DBAN, which will overwrite your data properly so that it cannot be retrieved. Practically speaking though, zeroing a drive is enough to keep most people from retrieving the data. A drive that is on the heap with hundreds or thousands of other drives isn’t likely to be scrubbed for data anyway. I could be wrong on this (anyone in the drive recycling business can chime in and enlighten me), but most of it probably gets shredded for scrap, right?

I use a cheap USB IDE/SATA hard drive converter to plug the old drive into my computer and then boot the computer with the Knoppix GNU/Linux distribution. Once I’m logged in, I use the following command to overwrite the whole drive with zeroes:


dd if=/dev/zero of=/dev/<drive> bs=1M

Replace <drive> with the device that represents the disk to be zeroed. Using the `dmesg’ command is helpful in determining the device name of a removable USB drive.
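If you are not sure which device node the USB drive was assigned, it pays to double-check before pointing dd at anything, since dd will happily zero the wrong disk. A quick check (the /dev/sdb here is just a hypothetical example; confirm that the reported size matches your drive):

$ dmesg | tail
$ fdisk -l /dev/sdb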

To check the progress we can open up another terminal window and do this:

$ while true; do kill -s USR1 <pid>; sleep 5; done

Replace <pid> with the process ID of the dd process that is running in the other terminal window. This will cause the running `dd’ command to report its progress every 5 seconds to the terminal that it is running in.

This technique could be extended to use /dev/urandom to write random data to the drive also, but generating random data slows things down significantly on my machine and I don’t want too many more excuses standing in the way of getting rid of stuff that is taking up space in my office!
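For reference, the random-data variant is just the same command with a different input file (again, replace <drive> with the actual device node):

dd if=/dev/urandom of=/dev/<drive> bs=1M

On most hardware /dev/urandom can’t generate data anywhere near the disk’s sequential write speed, which is why the zeroing run finishes so much faster.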


Written by newcome

January 14, 2010 at 8:51 pm

Posted in Uncategorized

DIY wins the day

leave a comment »

In the mid-90s, I was a college student, newly relegated to a dormitory room after having had ample room for my music endeavors at my parents’ home several hours across the state of Pennsylvania.

What I lacked in space, I made up for in newly-acquired access to the wonderful World Wide Web of information. You see, having a dorm room ethernet connection was my first link to the world outside of the single-user bulletin board systems that I used to dial into in high school. Those boards had something called `email’ that was sent over the `Internet’, but the power of such things was masked to me because they were hidden behind the disconnected nature of the dial-up bulletin board.

How does this relate to making music? In high school, I was very involved in recording bands using the best gear that I could afford. This included an enormous Tascam 38 half-inch tape machine and associated mixing desk, along with DBX noise reduction units, snakes, and the requisite wiring harnesses to make the whole thing work. I learned by trial and error on this cumbersome rig, using time-consuming tape handling and splicing techniques. During this time, I became aware of many independent music labels and bands embracing the DIY or `do it yourself’ ethos of recording music. DIY was something I could certainly relate to, because that is exactly what I was doing! However, I was missing an essential idea that was espoused in the burgeoning DIY scene: I was trying too hard to be `good’ — to be perfect.

The idea of not obsessing over the technical details of the recording was so endemic in the DIY scene that it had its own term: lo-fi. The way the term was bandied about didn’t sit well with me. Of course, the whole point of making records was to make them sound like `real’ recordings — like something that you would hear on the radio — something that would separate you from the amateur recording engineer.

Being separated from my beloved recording rig, I sought a new outlet for my recording urges. Luckily, I had gravitated to recording-oriented Usenet groups and found what I thought was a good temporary solution: the lowly 4-track cassette recorder. I would buy a cheap unit, and when I got home for the summer I would do `real’ recordings on my `real’ equipment.

Little did I know that buying a 4-track would change the way I thought about recording music forever. I became fearless. Having constant access to a recording device was intoxicating. I quickly filled up tape after tape with reckless abandon; cassettes were so much cheaper than the 1/2-inch reels I was used to that recording felt almost free. There was practically no setup time. I could record anything I wanted, anytime. If I had a riff in my head I could record it in seconds, rather than rolling out tons of equipment and spending hours setting it up just to get to the first step of actually getting a sound on tape.

I was now recording anything and everything. I began building a corpus of sound bites that I would go back to for more inspiration. This iterative process of coming up with ideas did not exist for me when the barrier to entry for committing something to tape was so high. As profound as this change was, it didn’t hit me fully until later, when I got back home to my big recording setup.

As personally liberating as the 4-track was, I didn’t see the same reactions in other people. Not yet. Lo-fi music remained an underground phenomenon, albeit an influential one, and self-recorded, self-released music was still regarded as inferior by most people.

Technology has a funny way of accelerating things in ways we don’t fully understand until we are profoundly affected by the change. We are in control one minute, and hurtling toward an unknown destiny the next. Take the music industry for example: technology worked in its favor during the heyday of the Compact Disc, but was its undoing in the era of the mp3. Both technologies were digital distribution formats, but the music industry miscalculated how long it could continue to milk the CD cash cow before turning its attention to digital downloads. Mp3s were good enough to be disruptive, but not good enough for the incumbents to take notice until they were reeling from the impact.

Fast forward to today, where we live in a world of mass-market products and cheap goods. Cost of distribution is approaching zero for many things, and on-demand production is a reality. Authenticity is the new scarcity. Sites like Etsy thrive on people’s desire for handmade products.

Similar sentiments are emerging in the world of web technology. Standards are essential to enabling communication on the web, but complexity is the enemy. Standards that are too complicated are difficult to implement correctly, limiting their effectiveness. The idea of `worse is better’ is not that worse is the goal, any more than lo-fi was the goal of indie recording artists. It was simply a side effect of a mantra to reduce complexity and increase communication, whether the medium is creative ideas in the form of music or code.

Written by newcome

January 14, 2010 at 7:15 pm

Posted in Uncategorized

Do experts teach best?

with 2 comments

I’ve taken the title of this post from one of the sub-topics in this article about learning to learn. The article makes several good points and is certainly worth a read, but I want to focus on one question that was brought up near the end of the article: who should teach?

The article’s argument pointed to the fact that an expert on a subject may be blind to things that a student needs to know. To the teacher, some things are taken for granted, so they aren’t communicated well. On the student’s side, there may not be enough insight even to articulate the need. Beginners need to see the process, not perfection.

When I was taking music lessons in school, I had a private teacher who was also a teacher at the school that I attended. Fortunately for me, he lived close by and my parents were able to afford his private lesson rates. I made significant progress under his tutelage, but it wasn’t until much later that I learned that there was more potential for me to progress in my studies than I thought.

Fast forward several years, and I found myself in a position to informally teach a student who was a friend’s younger sibling. The family was happy to compensate me for my time, which I gladly accepted. The compensation was much lower than what an accredited music teacher would have earned, but since I lacked any credentials, the arrangement was certainly appropriate.

Upon hearing about my student’s informal arrangement, her teacher (who was also my former teacher) expressed concern that she should not be taking lessons from anyone who was not a certified music instructor. Ordinarily that, as they say, might have been that. However, the teacher noticed that the student had been progressing faster than her peers, and the student cited being more comfortable asking questions in the less formal lessons that I had been giving her.

She continued her lessons with me for about a year, and then I turned her over to another peer of mine who was more accomplished than I was. I considered her to be in better hands, and I felt good about the progress that we had made during our lessons.

It wasn’t until several years later that I learned that the formal school curriculum had changed to encourage informal lessons by older students. Apparently, the technique really worked well — well enough to formalize as part of the curriculum.

Written by newcome

January 14, 2010 at 6:49 pm

Posted in Uncategorized

Running a Linux GUI app on a headless machine

with 2 comments

We all know that several remote desktop solutions exist for Linux, the most popular being VNC. Sometimes I don’t want a full desktop login though – I just want to run a single application. X11 supports this by design, but we do have to install a piece of software on the client machine if we aren’t running Linux.

My scenario is as follows: I have a Dell SC40 server that hosts some mp3s that I want to burn to CD. The server is headless — it has no monitor attached. My laptop does not have a CD burner, but the server does. The server is running the CentOS 5 GNU/Linux distribution. The laptop is running 64-bit Windows Vista Ultimate. The CD burning software that I like to use under Linux is k3b.

Ok, so k3b is installed on my CentOS system, and I can use Putty to connect to the server from my laptop using secure shell. What we ultimately hope to do is log in using Putty, run k3b from the command line, and have the GUI show up on the laptop. In order for the GUI to show on the laptop we need a piece of software called an X server. Since we are on Windows, this is not included with the OS, so we need something like Xming.
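As an aside, if the client were another Linux box this would be a one-liner, since ssh handles the forwarding itself and an X server is already running (the user and hostname here are hypothetical):

$ ssh -X user@server k3b

On Windows, Xming plays the X server role and Putty does the forwarding that the -X flag does above.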

Install Xming and run it. When Xming is running you will see an ‘X’ icon in the system tray. The default settings will do fine for our purposes here. The only tricky part is configuring Putty. We need to enable X forwarding and set the display variable to localhost:0. You’ll have to drill down to get to the setting (under Connection > SSH > X11 in Putty’s configuration tree) as shown below.

Now that Xming is listening on the laptop and we have logged in to the server using Putty with X forwarding enabled, we can start any GUI app at the command line and the GUI will display on the laptop via Xming.


# k3b

The result looks like this:

What is happening here is that the server machine is sending X11 protocol traffic back to the Xming software running on the laptop over a forwarded tcp connection provided by Putty. It’s a little confusing unless you know how Putty and X11 work, but setting it up still isn’t too hard.
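A quick sanity check on the server side is to look at the DISPLAY variable inside the Putty session. When X forwarding has been negotiated, sshd points it at a proxy display rather than a real one (the exact display number may differ on your setup):

$ echo $DISPLAY
localhost:10.0

If DISPLAY comes back empty, forwarding didn’t get set up and any GUI app will fail to start.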

Keep in mind that any GUI application can be run this way. Another favorite usage of this technique is monitoring my server backups to CrashPlan, which offers a GUI for administering your offsite backups.

Written by newcome

January 12, 2010 at 6:47 pm

Posted in Uncategorized

Lazy css gradient hack

leave a comment »

I was creating a new site template for a website that I’m working on and I found myself in need of some gradient images for the header bar at the top of the page. I fired up Gimp and whipped up a png file to use for the top bar. As I progressed through the site design I realized that I wanted to change the color scheme, which meant that I had to generate another png in Gimp. Not knowing how many iterations of this I was in for, I started looking for other solutions.

There are several ways of getting gradients done programmatically. I found one that even pre-generated the html markup on the server. There were also several that used Javascript to generate a series of <div> elements or a <canvas> element that contained the generated gradient image.

However, I wanted a quick way to experiment with my color scheme without pulling in any extra code, even temporarily. Most of the gradients that I use fade from the foreground color to white, so I thought that just using an overlay image that faded from fully transparent to white should give me the effect that I was looking for.

Here is the image file that I generated using Gimp. Note that I’m showing a Gimp screenshot of a wider image below in order to illustrate how the image was created.
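As an aside, the overlay image itself can also be made at the command line. Something along these lines with ImageMagick (an untested sketch; the 1x120 size is chosen to match the 120px bar height used below) should produce an equivalent transparent-to-white fade:

$ convert -size 1x120 gradient:none-white white-grad.png

A one-pixel-wide image is all we need, since the css repeats it horizontally anyway.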

In order to achieve our effect, instead of a single <div> element with a background image set, we need two extra <div> elements — one to serve as a container, and another to serve as the backdrop color that will show through from under the transparent overlay. Here is the html needed (the boilerplate html tags have been omitted for clarity):


<div id="container">
    <div id="background"></div>
    <div id="overlay"></div>
</div>

Unfortunately we need a comparatively large amount of css, but it still isn’t so bad. Notice that the main trick here is that the container serves as the positioning parent for the two elements that make up the gradient bar. The #background and #overlay elements are absolutely positioned with the same height and width so that they will occupy the same x and y positions.

#container {
    position: relative;
    height: 120px;
}

#background {
    position: absolute;
    background-color: green;
    height: 120px;
    width: 100%;
}

#overlay {
    position: absolute;
    background-image: url('images/white-grad.png');
    background-repeat: repeat-x;
    height: 120px;
    width: 100%;
}

If everything is set up as expected, our page will look like this:


Not incredibly impressive until we try the following: change the background-color of #background to black.

#background { … background-color: black; … }

Now the gradient bar is black, and we didn’t have to go back to Gimp to create any more images.

If you look closely at the images you will notice that the main drawback of this method is that the gradient is not very smooth. I only tested this in Firefox, so maybe it looks better in other browsers. It doesn’t matter to me since I will create the desired png file when I have my color scheme figured out.

Hopefully this technique is useful to someone; otherwise just grab one of the fancy Javascript gradient libraries mentioned above and get to it.

Written by newcome

January 11, 2010 at 11:39 pm

Posted in Uncategorized

Personal note taking and information management

with 7 comments

I have been meaning to write a series of posts related to how we use personal information and notes in our daily lives. I have been living and breathing this topic over the last few years, culminating in the development of Ubernote. I don’t even know where to begin with this expansive topic, so I thought that I’d just start writing blog posts, and start to crystallize things along the way. My interest in digital note taking is also being rekindled by having finally gotten a smart phone this year, and by the prospect of a new class of tablet devices that may be on the horizon.

I should preface the following by noting that I’ve always kept notebooks. From the time I was a little kid tinkering with electronics and drawing goofy pictures, I kept a series of notebooks (many of which have regrettably been lost or destroyed, which may be a subconscious driver for keeping data in the digital domain). Information means different things to different people, and so it follows that the way information is recorded, organized, and tracked varies just as widely as people do.

My story of electronic note taking goes something like this: somewhere in high school I got an old laptop from a relative that was really only powerful enough to run old DOS programs. I used the DOS edit command to write notes on this computer. Some examples of things that I would have written here were set lists for the cover band that I was playing in, as well as drafts for papers that I wrote in high school. The most important habit formed here was a set of text files that I started keeping, with todo lists and things that I wanted to buy or keep track of. The format of the text files was basically a tab-indented outline, similar to what the old Mac outliners of the day might have looked like if you were to print the outline to plain text. I don’t recall having used Lotus Agenda at this time; it would have been something that I wasn’t ready for yet anyway.

In college, I relied on folded-up pieces of paper or small notebooks. This was mostly out of necessity, as I didn’t have a cell phone or laptop. I needed to have ready access to most things, and carrying a floppy around wasn’t going to cut it. I don’t think that USB drives really existed yet at this time. Basically college was the dark ages for me in terms of PIM tools. After graduation I began using a tool called KeyNote (no, not the Mac program of the same name).

Around the same time, I got a Sony Clie. I vaguely recall using several programs such as List Pro and Shadowplan. I’ll have to go back through my notes around this time to figure out exactly what my criteria were for choosing a notes solution, but I think that I was looking for some power-user features. The PDA didn’t quite live up to my expectations in terms of convenience. I was impressed with the handwriting recognition of the Clie, but it was the syncing problem that annoyed me the most. I didn’t want to have to keep syncing the device in order to have the latest versions of my notes. I should point out that the desktop-based solutions had this issue too, since I would need to sync between my different computers at work and home also.

I changed jobs and found myself using Microsoft OneNote since my new employer had a license for Microsoft Office products. I used OneNote for about a year during my time there, and I have quite a number of complaints about it, which I will have to get into later in another post. Somewhere in the time just before I left the company, an intern showed me Evernote. I immediately started using it at home and I ended up dumping my older notes into it pretty much right away. I didn’t start using it yet at work, because my boss had an unreasonable obsession with banning any non-Microsoft tools, so I was afraid to be seen using it at the office.

The shortcomings of Evernote form the basis of my current obsession with note taking software today, which I’ll get into in a future post. Stay tuned, notes junkies.

Written by newcome

January 1, 2010 at 12:59 pm

Posted in Uncategorized