Dan Newcome, blog

I'm bringing cyber back

Scripting Pure Data


Graphical programming languages like PD are great for high-level tweaking and live modification of programs, but they all seem to suffer from the same problem: things get tedious when you build simple low-level constructs like loops or data structures.

Pure Data has always been extensible by creating your own externals in C, but there is a middle ground between building subpatches and building first-class externals: scripting with a dynamic language like Lua or Python.

Lua vs Python

There are two main scripting languages in use with PD. I’m sure there are others, but they aren’t very common. Initially I was actually looking for a way to script PD using Javascript, and I didn’t find anything. I’m considering building something that embeds V8 into PD in the future. Or maybe even… gasp… the .NET framework. The choice between Lua and Python came down to how cleanly the language embedding was implemented and how smoothly I was able to get things working, rather than my own language preference (I was rooting for Python, since I already know it pretty well).

Python

The first thing I did was try to install Thomas Grill’s famous pyext. The prebuilt binary for OSX was crashing for me and I couldn’t figure out why. I got the sources and built my own binary, but I was still getting crashes as soon as I loaded the library in PD. I later realized that this was down to my installed version of Python: I had a newer version installed via homebrew, possibly a 64-bit build. The only 64-bit version of PD for OSX right now is highly experimental and doesn’t include things like GEM, so I’m running a 32-bit version, and all externals need to be 32-bit as well. I suspect that mismatch was part of the problem.
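In hindsight, checking the architecture of each Python binary up front would have saved some time. The file command shows whether a Mach-O binary is 32-bit, 64-bit or universal (a minimal check; /usr/local/bin/python is where homebrew typically puts its build):

$ file /usr/bin/python        # the stock OSX Python
$ file /usr/local/bin/python  # the homebrew Python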

Anyway, once I reverted to the Python 2.7.2 that shipped with OSX, pyext worked fine. I still get crashes with some of the demo patches, though.

Installing pyext involves dropping py.pd_darwin under ~/Library/Pd. Then I was able to open the demo patches from the source tree. Note that many of the patches require the library to be loaded already, which can be done using the -lib py startup flag or by simply instantiating the [py] object in any open patch. When it loads you should see some text printed to the PD console window.
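For reference, the whole install-and-load dance is only a few commands (a sketch: it assumes you unpacked pyext into the current directory and that the pd binary from your PD install is on your PATH):

$ mkdir -p ~/Library/Pd
$ cp py.pd_darwin ~/Library/Pd/
$ pd -lib py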

Instantiation looks something like [pyext <script> <object> <args>]. As we’ll see, this is a little more cumbersome than Lua.

Lua

Having never used Lua, I was hesitant to try this out, but I’m glad I did. Actually, the only reason I did was that I couldn’t get Python to work right away. Lua is interesting in that it has only one built-in data structure: the table. Fortunately, tables can represent just about any other structure, like arrays and maps. The code for generating PD objects is very concise, even more so than some of the Python examples, which is somewhat hard to believe. There are also easy provisions for registering names directly with PD, so that objects built in Lua can be instantiated without any extra syntax, which is pretty awesome. Loading a single Lua source file can register any number of externals that can then be instantiated as if they were normal PD abstractions. Read that line again and rejoice.

Installing Lua isn’t necessary, as pdlua is already part of PD-extended. Simply instantiate [pdlua]; we can then load a Lua file by sending a load message to [pdlua].
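If you’d rather have Lua available at startup than instantiate [pdlua] by hand, the same -lib mechanism described above for py should work here too (a sketch; mypatch.pd is a placeholder, and this assumes pdlua ships with your PD-extended build):

$ pd -lib pdlua mypatch.pd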

Going forward I’m going to be using Lua, mostly for its ease of registering names in PD and the clean instantiation of abstractions built in it. I’ll cover some programming issues I encounter in later posts.

Written by newcome

December 29, 2013 at 9:56 am

Posted in Uncategorized

BASH shell navigation hacks


I love the shell, for the most part. Modern systems ship with the (mostly) fantastic BASH shell, which includes tab completion, sophisticated line-editing modes and many nice usability features (try using a shell without history or line-editing support and see what a difference they make).

There are still some places where the shell falls down, in my opinion (ahem), and they mostly have to do with just getting around the filesystem. I started writing some tools like g (go), url (open urls), and r (recent). I also had an ugly tool for Windows that I called shelper (shell helper?) that I used to set up environment variables for Visual Studio and the C++ compiler (cl.exe), among other things.

These tools occupy an uncanny valley between simple configuration and actual applications/utilities. I’m never sure if these things belong in my dotfiles or if I should maintain them as projects. One also gets the distinct feeling of reinventing the wheel, or worse, the nagging suspicion that you are reimplementing something that already exists in your shell but you don’t even know what to search for in the docs.

Then the floodgates opened. I saw the j (jump?) command for fast directory switching, which led me to fasd and then z.

I’m not sure what the granddaddy of all these commands is (pushd/popd maybe?) but they all have some similarities.

Going back to my own set of hacks, I don’t know if it’s better to just keep using my own stuff or try to convert to, say, fasd.

Recently I’ve been writing some blog posts with Jekyll, so I have directories of long filenames with dates prepended to them. Working through these files with built-in BASH completion is tedious, and I never got tab cycling to work in BASH for some reason (and I’m not convinced that it is really what I want anyway). So I thought: why not reference files by number? I’m sure that one of these fasd clones will do this, but I have no idea which one.

So of course, I wrote a shell script to do it anyway. I give you lsn:

#!/bin/bash
# lsn lets you select a file by number on the shell

if [[ $# -eq 1 ]]
then
	# print the file with number n
	ls | sed -n "${1}p"
else
	# list files with numbers starting at 1
	ls | cat -n
fi

That lets me do something like this:

$ ./lsn
     1	2012-07-21-nanopad-teardown.markdown
     2	2012-07-30-iph-midi-looper.markdown
     3	2013-02-08-unboxing-the-keith-mcmillen-softstep.markdown
     4	2013-05-21-gridlok-drum-sampler-for-ipad-review.markdown
     5	lsn
$ vi `./lsn 4`

It’s questionable whether the backtick notation is more painful than copying the filename or muddling through tab completion, though. Of course, I could have a command called vin, or maybe have lsn take the command as an argument; a sketch of the former follows.
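For what it’s worth, vin is only a couple of lines (a hypothetical sketch; vin isn’t a real command, just the idea from the previous paragraph, and it assumes lsn is somewhere on your PATH):

#!/bin/bash
# vin: open the file with number n in vi
vi "$(lsn "$1")"

Then $ vin 4 replaces the backtick incantation above.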

Although this all seems incredibly lame, it serves to concretely describe a small bit of friction I encounter, to enumerate some other people’s solutions (which I seem to forget or be unable to find again), and to leave a record, since I’ll probably misplace lsn at some point in the future or forget it even exists, and this is the only way I’ll find it again (unless it gets bigger and I put it on github). One hack at a time (one hack in front of the other?). Or something.

Written by newcome

May 21, 2013 at 1:35 pm

Posted in Uncategorized

The Zen of Pure Data


I have been mulling over the idea of writing a guide to Pure Data aimed at developers proficient in at least one other, more mainstream programming language. Pure Data is so different from most other programming environments, yet its utility is such that I keep coming back to it despite the difficulty of managing code written in it using traditional techniques.

I’ve written a couple of posts in the past dealing with things like connection order and event processing precedence. However, the little oddities of PD run pretty deep, and I’m afraid that it is quite an involved subject to start in on.

Plenty of other books have been written on PD in the past, and I don’t want to write just another PD book. I think enough has been written on general signal-processing techniques and on programming PD in general. But there are idiomatic things in PD that come up again and again, and I haven’t seen a concise guide aimed at programmers who already code in other languages and are already familiar with signal processing.

Here is a rough list of topics I might cover if I write such a guide:

Processing order

Subpatching

Bang vs everything else

Lists, atoms and messages

Top down, left right (sort of)

Functional PD

That’s a good start. I’ll add to this list soon hopefully.

Written by newcome

May 14, 2013 at 5:22 pm

Posted in Uncategorized

Goodbye Posterous – a migration story


Many of you know (or should know, if you have anything still on Posterous!) that Posterous is shutting its doors following its acquisition by Twitter. I was one of the first Posterous users in 2008, and they even gave me many more blogs than were usually allowed on the service at the time. Heady days, those.

Anyway, Posterous turned out not to be the ideal host for my blogs, and I continued with WordPress. However, I still maintained a few specialty blogs there (Alewright, for one).

One by one I have been moving blogs to Octopress, the open-source static blog software, which I’ve been hosting on Heroku instances. Now that Posterous is shutting down, though, I need to move the last few off, so I’m writing up this post to help anyone else who wants to do the same. Sure, you can use their export tool to get a tarball of your stuff, but if you are lazy like me and just want to get things over to Octopress, look no further than this ruby script.

I’m on a Mac, but I’ve used rvm to bump my ruby version up to 1.9.3. I installed the posterous gem using:

$ gem install posterous

Log into Posterous, go to the API page and get an API key by clicking on “view token”.

You need to know the name of your blog, your username and password, and the API key. Then run:

$ ruby posterous-export.rb username password apikey

I had to patch the Posterous gem to get things working. Otherwise I got this error:


/Users/dan/.rvm/gems/ruby-1.9.3-p374/gems/ethon-0.5.12/lib/ethon/easy.rb:234:in `block in set_attributes': The option: username is invalid. (Ethon::Errors::InvalidOption)
Please try userpwd instead of username.
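The error message itself suggests the fix: somewhere the gem passes an option named username where ethon wants userpwd. Something like this tracks down the offending line to patch (a sketch; the gem path mirrors the rvm layout in the trace above):

$ grep -rn "username" ~/.rvm/gems/ruby-1.9.3-p374/gems/posterous-*/lib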

Running the script gets you a file layout on disk including images and HTML-formatted post files, ready for use by Jekyll/Octopress.

To get the new Octopress blog running, just clone the repo and copy the images/ and _posts directories under the octopress/source directory.
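Concretely, that’s just a clone and a couple of copies (a sketch; export/ stands for wherever the export script wrote its output, and the repo URL is the canonical Octopress one):

$ git clone https://github.com/imathis/octopress.git
$ cp -R export/images octopress/source/
$ cp -R export/_posts octopress/source/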

I’ll probably do another post about working with and customizing Octopress, so I won’t go into configuration here. The API presumably shuts down on April 30, so don’t wait too long!

Written by newcome

April 27, 2013 at 7:54 pm

Posted in Uncategorized

Input paradigms for wearable computing


I’ve been tracking various input methods over the course of this blog, providing commentary on tablets, the death (and possible rebirth) of the stylus, touch computing and now, with the Google Glass, wearable computing.

I hesitate to call the Glass a wearable computing device, putting it in the company of the clunky hardware of the past, but along with the new crop of smart watches, I think it’s still an accurate category.

Anyone who has followed the wearable computing space for a while will notice that most adherents use a device called a “twiddler” for text input. A twiddler is a one-handed keyboard-like device that allows the user to (rather slowly) touch type without having to look at the device or set it down.

Glass obviously doesn’t ship with a twiddler; it relies on voice commands instead. Of course, there is nothing to prevent you from using the keyboard on your mobile device for text entry, but that hardly counts as the seamless heads-up experience that Glass promises.

We seem to have gotten used to people roaming the streets apparently talking to themselves on hands-free devices, but are we ready for the full monty: the Glass camera pointing around while people mutter to themselves, all at the same time?

What about privacy of the wearer? Having to issue voice commands is hardly subtle in many environments.

Fortunately, the simple fact that Glass has persistent Bluetooth connectivity and a display means it can offer more feedback options than a simple twiddler. A system like Swype could work really well if the keyboard were projected to the wearer while input was received from the phone’s touch screen.

So several closing points:

  • It seems unlikely to me that many people are going to embrace an input method that requires a separate device or a new learning curve.
  • Most people are used to touchscreen keyboards by now, and most devices that are likely to be paired with the Glass already have them.
  • Tactile feedback can be replaced by visual feedback by virtue of the heads-up display with modifications to the keyboard software.

In light of these points, I don’t see the twiddler making its way into the new crop of wearable devices. For heavy lifting, there are plenty of Bluetooth keyboards around if you don’t mind looking like you are typing off into space. For everything else there is your phone (duh!).


Inset photo credit: Irene Yan

Written by newcome

April 24, 2013 at 1:46 pm

Posted in Uncategorized

Inner sourcing to open sourcing


Ahmet Alp Balkan recently wrote an interesting piece on what you should open source at your company. I like his assertion that anything you’d likely need at another job should be open sourced. Some other influential programmers have taken more aggressive stances on this, but I think Ahmet’s idea is a good start. You should check his article out now if you haven’t seen it already.

Many of my experiences open sourcing code that isn’t purely a personal project have followed a trajectory of internal release and eventual open sourcing. I think even trying to decide whether or not some code is critical to your particular business is jumping the gun. I really took to (Github founder) Tom Preston-Werner’s readme-driven development treatise for these releases. If an idea was concrete enough to put together a concise readme for, the project got pulled out into something for internal release, even if it was only used on the current project initially.

I called this “inner sourcing” the project. I’ve since seen some other references to inner sourcing code so it seems I’m not the only one that thinks this way.

The process generally involved creating a module within the parent project, creating the readme and sending out an internal email to the company announcing the project. In the beginning it felt kind of silly to send out these announcement emails but eventually everyone started to get the idea of announcing these little internal projects.

When I was working for a small consulting company back on the East Coast, I created a small DSL (domain-specific language) for writing queries against Microsoft CRM called CrmQuery. When I came up with the idea, I wrapped the project up in a separate repository and created a more general build system to target the several runtimes in use with our various clients at the time. I wrote the readme as if I were putting it out to the world, assuming the reader had none of our internal context. I think this is an important thought exercise, and it improves the quality of the project even if it never makes it outside the company.

CrmQuery ultimately saved us tons of time and got used on every CRM project we did after I released it. When I put it on GitHub later I got some more feedback on the project that improved it even more. I get people commenting on my projects and filing issues through GitHub all the time. This certainly is much more helpful than having the code just sitting in your private repository.

Ultimately I ended up getting some other consulting clients from people finding out about CrmQuery and a CRM test-double service I wrote called FakeCRM. There isn’t much better an endorsement of open source than that!

Written by newcome

March 19, 2013 at 12:25 pm

Posted in Uncategorized

Migrating between Linux virtual hosting services


I’ve been shuffling my sites around lately, canceling some virtual machines that I don’t use much and consolidating lower-traffic sites onto cheaper hosting. I’m mostly running Apache and MySql on these sites, along with Node.js, though I’m looking at putting Nginx in front of the Node.js sites.

Anyway, most of the work here is moving the contents of the web directories and the MySql data directory. On the old server:


$ sudo service mysql stop
$ sudo tar cf ~/mysql-bak.tar /var/lib/mysql   # MySql data; needs root to read
$ sudo tar cf ~/www-bak.tar /var/www
$ cd /etc && sudo tar cf ~/apache-config.tar apache2   # Apache config, relative to /etc

On the new server we need at least MySql and Apache:

# sudo apt-get install mysql-server
# sudo apt-get install apache2

I was able to copy my previous Apache configuration over from the old server and reuse it. I copied the symlinks for sites-enabled and mods-enabled, which was pretty nice.
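For completeness, restoring the tarballs on the new server is roughly the reverse of creating them (a sketch; the mysql and www tars were created from absolute paths, so tar stripped the leading slash and they unpack cleanly relative to /, while the apache2 tar was made from /etc; user@old-server is a placeholder):

$ scp 'user@old-server:~/*.tar' .
$ sudo service mysql stop
$ sudo tar xf mysql-bak.tar -C /
$ sudo tar xf www-bak.tar -C /
$ sudo tar xf apache-config.tar -C /etc
$ sudo service mysql start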

I used to install node.js from source, but this time around I installed from apt. I figure Node is more stable now, so I’ll give it a shot. Same with NPM.

# apt-get install nodejs
# apt-get install npm

I had to symlink the nodejs binary in order to get it working with forever:

# ln -s /usr/bin/nodejs /usr/bin/node

However, forever still isn’t working for me. It’s looking for daemon.js, which I had installed using npm:

Error: Cannot find module './daemon.v0.6.19'
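I haven’t found the root cause yet, but since the error looks like a version mismatch in forever’s daemon dependency, reinstalling forever so that npm resolves a consistent set of dependencies seems like the first thing to try (a guess on my part, not a verified fix):

# npm uninstall -g forever
# npm install -g forever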

I had to grant all privileges on my MySql databases instead of just CRUD stuff like I used to. I’m not sure why this is yet.
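For reference, the grant looked something like this (a sketch; mydb and myuser are placeholders for the real database and user names):

$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost'; FLUSH PRIVILEGES;"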

All in all, moving a Linux VPS isn’t too bad if you can reuse most of the configuration. More on this later.

Written by newcome

March 1, 2013 at 1:27 am

Posted in Uncategorized