I have been mulling over the idea of writing a guide for Pure Data aimed at developers proficient in at least one other, more mainstream programming language. Pure Data is so different from many other programming environments, yet its utility is such that I keep coming back to it despite the difficulties of managing code written in it with traditional techniques.
I’ve written a couple of posts in the past dealing with things like connection order and event processing precedence. However, the little oddities of PD run pretty deep, and I’m afraid that it is quite an involved subject to start in on.
Plenty of other books have been written on PD in the past, and I don’t want to write just another PD book. Enough has been written on general signal processing techniques and on programming PD in general. But there are idiomatic things in PD that come up again and again, and I haven’t seen a concise guide aimed at programmers who already code in other languages and are already familiar with signal processing.
Here is a rough list of topics I might cover if I write such a guide:
- Bang vs. everything else
- Lists, atoms and messages
- Top-down, left-to-right (sort of)
That’s a good start. I’ll add to this list soon hopefully.
Many of you know (or should know, if you have anything still on Posterous!) that Posterous is shutting its doors following its acquisition by Twitter. I was one of the first Posterous users in 2008, and they even gave me many more blogs than were usually allowed on the service at the time. Heady days, those.
One by one I have been moving blogs to Octopress, the open-source static blog software, which I’ve been hosting on Heroku instances. However, now that Posterous is shutting down, I need to move the last few off, so I’m writing up this post to help anyone else who wants to do the same. Sure, you can use their export tool to get a tarball of your stuff, but if you are lazy like me and just want to get everything over to Octopress, look no further than this Ruby script.
$ gem install posterous
Log into Posterous, go to the API page, and get an API key by clicking on “view token”.
You need to know the name of your blog, your username and password, and the API key. Then run:
$ ruby posterous-export.rb username password apikey
I had to patch the Posterous gem to get things working. Otherwise I got this error:
/Users/dan/.rvm/gems/ruby-1.9.3-p374/gems/ethon-0.5.12/lib/ethon/easy.rb:234:in `block in set_attributes': The option: username is invalid. (Ethon::Errors::InvalidOption)
Please try userpwd instead of username.
Running the script gets you a file layout on disk including images and HTML-formatted post files, ready for use by Jekyll/Octopress.
To get the new Octopress blog running, just clone the repo and copy the images/ and _posts directories under the octopress/source directory.
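Concretely, the copy step looks something like the following. This is a sketch under assumptions: I’m pretending the export script wrote into an `export/` directory (substitute whatever path yours actually used), and the `mkdir` line just simulates the two trees so the commands can be shown end to end.

```shell
# Assumed layout: the export landed in ./export and Octopress is checked
# out at ./octopress. The mkdir only stands in for those two trees here.
mkdir -p export/images export/_posts octopress/source

# Copy the exported images and posts into the Octopress source tree.
cp -r export/images octopress/source/
cp -r export/_posts octopress/source/

ls octopress/source
```

After this, the usual `rake generate` flow should pick the posts up.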
I’ll probably do another post about working with and customizing Octopress, so I won’t go into configuring it here. The API presumably shuts down on April 30, so don’t wait too long!
I’ve been tracking various input methods over the course of this blog, providing commentary on tablets, the death (and possible rebirth) of the stylus, touch computing and now with the Google Glass – wearable computing.
I hesitate to call the Glass a wearable computing device, putting it in the company of the clunky hardware of the past, but along with the new crop of smart watches, I think it’s still an accurate category.
Anyone who has followed the wearable computing space for a while will notice that most adherents use a device called a “twiddler” for text input. A twiddler is a one-handed keyboard-like device that allows the user to (rather slowly) touch type without having to look at the device or set it down.
Glass obviously doesn’t ship with a twiddler; it relies on voice commands instead. Of course, there is nothing to prevent you from using the keyboard on your mobile device for text entry, but that hardly counts as the seamless heads-up experience that Glass promises.
We seem to have gotten used to people roaming the streets apparently talking to themselves when using hands-free devices, but are we ready for the full monty: the Glass camera pointing around while its wearers mutter to themselves all at the same time?
What about privacy of the wearer? Having to issue voice commands is hardly subtle in many environments.
Fortunately, the simple fact that Glass has persistent Bluetooth connectivity and a display can provide more feedback options than a simple twiddler. A system like Swype could work really well if the keyboard was projected to the wearer while input was received from the phone’s touch screen.
So several closing points:
- It seems unlikely to me that many people are going to embrace an input method that requires a separate device or a new learning curve.
- Most people are used to touchscreen keyboards by now, and most devices that are likely to be paired with the Glass already have them.
- Tactile feedback can be replaced by visual feedback by virtue of the heads-up display with modifications to the keyboard software.
In light of these points, I don’t see the twiddler making its way into the new crop of wearable devices. For heavy lifting, there are plenty of Bluetooth keyboards around if you don’t mind looking like you are typing off into space. For everything else there is your phone (duh!).
Inset photo credit: Irene Yan
Ahmet Alp Balkan recently wrote an interesting piece on what you should open source at your company. I like his assertion that anything you’d likely need at another job should be open sourced. Some other influential programmers have taken more aggressive stances on this, but I think Ahmet’s idea is a good start. Check his article out if you haven’t seen it already.
Many of my experiences with open sourcing code that isn’t purely a personal project have followed a trajectory of internal release and eventual open sourcing. I think even trying to decide whether or not some code is critical to your particular business is jumping the gun. I really took to (GitHub founder) Tom Preston-Werner’s readme-driven development treatise for these releases. If an idea was concrete enough to merit a concise readme document, the project got pulled out into something for internal release, even if it was only used on the current project at first.
I called this “inner sourcing” the project. I’ve since seen some other references to inner sourcing code so it seems I’m not the only one that thinks this way.
The process generally involved creating a module within the parent project, creating the readme and sending out an internal email to the company announcing the project. In the beginning it felt kind of silly to send out these announcement emails but eventually everyone started to get the idea of announcing these little internal projects.
When I was working for a small consulting company back on the East Coast, I created a small DSL (domain-specific language) for writing queries against Microsoft CRM called CrmQuery. When I came up with the idea I wrapped up the project in a separate repository and created a more general build system to build for several of the runtimes in use with our various clients at the time. I wrote the readme as if I were putting it out to the world and no one had any other internal context of our environment. I think that this is an important thought exercise and improves the quality of the project even if it never makes it outside of the company.
CrmQuery ultimately saved us tons of time and got used on every CRM project we did after I released it. When I put it on GitHub later I got some more feedback on the project that improved it even more. I get people commenting on my projects and filing issues through GitHub all the time. This certainly is much more helpful than having the code just sitting in your private repository.
Ultimately I ended up getting some other consulting clients from people finding out about CrmQuery and a CRM test double service I wrote called FakeCRM. There isn’t much better of an endorsement of open source than that!
I’ve been shuffling my sites around lately, canceling some virtual machines that I don’t use much and consolidating lower-traffic sites onto cheaper hosting. I’m mostly using Apache and MySQL on these sites, along with Node.js. I’m looking at moving to Nginx in front of the Node.js sites, though.
Anyway, most of the work here is moving the contents of the web directories and the MySQL data directory:
$ sudo service mysql stop
$ tar cf ~/mysql-bak.tar /var/lib/mysql
$ tar cf ~/www-bak.tar /var/www
$ tar cf ~/apache-config.tar apache2
On the new server we need at least MySQL and Apache:
# sudo apt-get install mysql-server
# sudo apt-get install apache2
I was able to copy my previous Apache configuration over from the old server and reuse it. I copied the symlinks for sites-enabled and mods-enabled, which was pretty nice.
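Part of why carrying the config tree over “just works” is that tar preserves symlinks, so the sites-enabled/ and mods-enabled/ links survive the round trip intact. Here’s a scratch-directory sketch of that behavior; all the file and directory names are made up, not my real config:

```shell
# Build a fake apache2 tree with a sites-enabled symlink, the way
# a2ensite lays things out.
mkdir -p demo/apache2/sites-available demo/apache2/sites-enabled
touch demo/apache2/sites-available/mysite
ln -s ../sites-available/mysite demo/apache2/sites-enabled/mysite

# Archive it and extract into a new location, as if moving servers.
tar cf demo.tar -C demo apache2
mkdir restored
tar xf demo.tar -C restored

# The entry is still a symlink, and it still resolves.
ls -l restored/apache2/sites-enabled
```

If you’d copied with plain `cp` instead, you’d want `cp -a` (or `-P`) to get the same symlink-preserving behavior.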
I used to install node.js from source, but this time around I installed from apt. I figure Node is more stable now, so I’ll give it a shot. Same with NPM.
# apt-get install nodejs
# apt-get install npm
I had to symlink the nodejs binary in order to get it working with forever:
# ln -s /usr/bin/nodejs /usr/bin/node
However forever still isn’t working for me. It’s looking for daemon.js, which I installed using npm.
Error: Cannot find module './daemon.v0.6.19'
I had to grant all privileges on my MySQL databases instead of just the CRUD statements I used to grant. I’m not sure why this is yet.
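For reference, the blanket grant looks like this; the database and user names are placeholders, so substitute your own:

```sql
-- Hypothetical names: replace myblog/myuser with your database and user.
GRANT ALL PRIVILEGES ON myblog.* TO 'myuser'@'localhost';
FLUSH PRIVILEGES;
```

A tighter grant would list only SELECT, INSERT, UPDATE and DELETE, which is what I had before the move.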
All in all, moving a Linux VPS isn’t too bad if you can reuse most of the configuration. More on this later.
If you’ve followed me for a while on this blog you know I’ve written about some different UI/UX paradigms in the past, mostly focusing on the rise of tablets and touch computing and the passing of the stylus into niche areas and near obscurity.
The scary and exciting thing about all of these changes is that they happen nearly overnight with the launch of a pivotal product and the entire market shifts. The most amazing thing about these changes is that once our perceptions are changed, we forget the old paradigms seemingly overnight and the new changes permeate the way we interact.
Google did this with search; Apple did it with the iPhone. I think Google is going to do it again with the Glass.
Once I had constant and immediate access to my online world via smartphone, opening up my laptop seemed almost baroque. I think that the Glass is going to have the same effect, moving us one step closer on the connectivity continuum.
Having such immediate access to data is going to challenge us in new ways, and context is going to be more important than ever to avoid information overload. Devices that are tuned to where our attention is focused can be powerful allies in giving us contextual information.
#ifIHadGlass I would focus on solutions that learn the wearer’s patterns of attention and focus to build smart, contextual maps of their online and offline existence. Otherwise instead of Glass being one step closer to seamless integration with our data, it’s going to drown us in it faster. As one of the founders of Ubernote, I understand personal data patterns and I’d really love to bring the level of context that Glass could provide to bear on the problem of personal data.
We already have some contextual clues via GPS and browsing history, etc. However, with a visual data stream of attention and focus I think we can extract a lot more data about the contexts in our daily lives. Along with other data from devices like the Somaxis MyoLink, Nike Fuel Band and the Zeo Sleep Manager we can understand a lot more about ourselves.
Lately I’ve been playing around in Ruby with some ideas posed by a friend of mine. I’ve done some Rails hacking in the past, but I don’t usually get far off the beaten path in Ruby. Well, except for that time I hacked up a version of Mongrel to try to make it a streaming HTTP server before Node.js was released. That was pretty awesome.
Anyway in order to get this stuff working I had to patch the ruby readline gem (rb-readline). Initially I just did this in my own gem installation path (~/.rvm/gems/…) but later on I wanted to just pull that library into my project until I can figure out a way to get it working without patching.
Initially I tried just copying the gem locally into my project and “require”-ing the code directly. It seems like this should work, but I kept getting file load errors like this:
/Users/dan/.rvm/rubies/ruby-1.9.3-p374/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- rb-readline (LoadError)
	from /Users/dan/.rvm/rubies/ruby-1.9.3-p374/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
So the next thing I tried was using a Gemfile to specify the local location:
gem "rb-readline", :path => "./rb-readline-0.4.2"
This resulted in errors that the source could not be found:
Could not find gem 'rb-readline (>= 0) ruby' in source at ./rb-readline-0.4.2.
Source does not contain any versions of 'rb-readline (>= 0) ruby'
After looking around a bit I finally read the Gemfile man page:
Similar to the semantics of the :git option, the :path option requires that the directory in question either contains a .gemspec for the gem, or that you specify an explicit version that bundler should use.
So my final gemfile looked like this:
gem "rb-readline", "0.4.2", :path => "./rb-readline-0.4.2"
Now bundle install worked. But I still wasn’t able to require the code, because I had forgotten the following in my script:
require "rubygems"
require "bundler/setup"