Windows XP running inside Ubuntu Linux and MacOSX

June 13, 2006 at 12:44 pm

I was googling around for a solution to why the fonts on Ubuntu Linux are stretched on my Dell 20″ monitor, and the search words “widescreen fonts stretched ubuntu” brought up this page as the first hit.

Well, J Wynia’s blog post didn’t really solve my problem with the fonts, but it did remind me that VMWare recently made their VMWare Server software available free of charge, presumably in response to the attention received by the open source Xen virtualization software.

Thanks to the free VMWare Player, I had been running Windows XP on my Ubuntu Linux box, but after I recently upgraded to the latest Ubuntu Linux 6.06 (codename “Dapper Drake”), the VMWare Player stopped working. I decided to try updating to VMWare Server to see if that would solve the problem.

I downloaded the VMWare Server software and installed it using ./vmware-install.pl.

This brought up a series of questions, and I accepted the default answer for each one until I hit a question about the kernel header files.

It wouldn’t accept the default response, complaining that the headers didn’t match my running kernel.

I googled the exact text of the error message, and this post to the Ubuntu forums came up, which explained that you must install the headers for the kernel release you are running. (This probably explains why VMWare Player stopped working when I upgraded: the kernel headers had changed.)

I typed uname -r to find out which version of the headers to get, and then just passed its output straight into the install command to avoid typos.
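
On Ubuntu that boils down to something like 'sudo apt-get install linux-headers-$(uname -r)', which pulls in the headers that match the running kernel in one go.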

Then, when the question about the C header files came up, I pointed it at the directory that the headers package had just installed.
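
With the Dapper headers package installed, that directory is /usr/src/linux-headers-2.6.15-23-386/include.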

If you are trying this at home, replace 2.6.15-23-386 with whatever version you have installed (the output of the uname -r command).

After this, the install proceeded along and the very last step was to move my virtual machine file into the directory /var/lib/vmware/Virtual Machines.

I fired up the VMWare Server console, loaded my virtual machine and behold, Windows XP running inside my Linux box!

MacOSX solutions – Bootcamp and Parallels

My days running WinXP on Linux may be numbered, because on Thursday I’m expecting to receive a new Macbook, which sports the new Core Duo chip. Using Apple’s Bootcamp software, I can run both MacOSX and Windows natively on the same machine.

However, because it’s a drag to have to reboot, I will probably run Windows in a virtual machine using Parallels. With the Core Duo processor, this will be faster than running it with VMWare on Linux.

Boston Macintosh Users Group

I’m looking forward to the BMac users group meeting tomorrow night because the topic is Windows on Intel Macs. Hopefully I’ll learn some tips and tricks, and also manage to unload some old Mac gear that I’ve been trying to get rid of. Anyone want an Epson inkjet printer or a Yamaha CD burner?


Macbook ships three days early

June 12, 2006 at 9:12 am

Yay, the Macbook shipped 3 days earlier than expected. This means that I should get it before I leave for San Francisco! Now I need to go find out why the RAM and HD haven’t shipped yet.

To prepare for the upgrade, I’m going to clean up my Powerbook 17″ HD and figure out what apps I really need to install on the Macbook. Expect a short list to be posted within the next couple days.

Update: while the Macbook is shipping earlier than expected, it appears that it’s still not scheduled to arrive until June 19, one day after I leave for San Francisco. Anyone know how to ask FedEx to reroute or hold an item?


Scraping a jazz events calendar

June 12, 2006 at 12:48 am

As mentioned in my last post, Building a live music calendar, I’m disappointed that the websites listing jazz events in Boston don’t offer the data as an RSS or iCal feed. One example is the WGBH Jazz Calendar, which has probably the most comprehensive listing of jazz events in the Boston area.

In my talk about Plone4Artists at EuroPython 2005, I mentioned a tool called Scrape ‘n’ Feed, which will scrape a website and generate an RSS feed. Well, it’s been a year since I first discovered this tool, and now I’m revisiting it to see if I can make it work. Here is my first foray into this scraping business.

ScrapeNFeed depends on Beautiful Soup and PyRSS2Gen, both of which are easy to install on Ubuntu Linux.
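
Something along the lines of 'sudo apt-get install python-beautifulsoup' covers Beautiful Soup, and PyRSS2Gen can go in via easy_install (or a plain download) if there’s no package for it in universe.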

Once I installed these two packages, I downloaded the ScrapeNFeed.py script and wrote a small script, getwgbhfeeds.py, to scrape the WGBH calendar.
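
In rough outline it looks like the sketch below; the calendar URL and the 'event' class in the soup calls are placeholders to adapt to the real WGBH markup, and check the ScrapeNFeed docs for the exact ScrapedFeed/load signatures.

    #!/usr/bin/env python
    # getwgbhfeeds.py -- scrape the WGBH jazz calendar into an RSS 2.0 feed.
    # Sketch only: the calendar URL and the 'event' class below are placeholders,
    # so adjust the soup calls to whatever the page actually uses.
    import ScrapeNFeed
    import PyRSS2Gen
    from BeautifulSoup import BeautifulSoup

    WGBH_URL = 'http://www.wgbh.org/jazz'   # placeholder for the calendar page

    class WGBHJazzFeed(ScrapeNFeed.ScrapedFeed):
        def HTML2RSS(self, headers, body):
            soup = BeautifulSoup(body)
            items = []
            for event in soup.findAll('div', {'class': 'event'}):
                anchor = event.find('a')
                if anchor is None:
                    continue
                title = ' '.join(t.strip() for t in anchor.findAll(text=True) if t.strip())
                link = dict(anchor.attrs).get('href', WGBH_URL)
                blurb = ' '.join(t.strip() for t in event.findAll(text=True) if t.strip())
                items.append(PyRSS2Gen.RSSItem(title=title, link=link,
                                               description=blurb))
            self.addRSSItems(items)

    WGBHJazzFeed.load('WGBH Jazz Calendar',              # feed title
                      WGBH_URL,                          # page to scrape
                      'Jazz events in the Boston area',  # feed description
                      'wgbh.xml',                        # RSS output file
                      'wgbh.pickle')                     # cache of items already seen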

Run the script with ./getwgbhfeeds.py and it will output a file, wgbh.xml, in RSS 2.0 format. You can then open this file in your RSS reader of choice and view all the Boston jazz events.

One thing that I noticed is that some of the items in the list have an extra <br />, which means the title doesn’t get read in correctly. I’ll have to find a way to ignore the <br />, which I’m sure will be fairly simple with BeautifulSoup.
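
Something like this, collecting just the text nodes under the tag, should do it (title_tag here stands for whatever tag holds the title):

    # Keep only the text nodes, so stray <br /> tags simply drop out.
    title = ' '.join(t.strip() for t in title_tag.findAll(text=True) if t.strip())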

What’s next

At the OPMLCamp a few weeks ago, I met Mike Kowalchik, the creator of grazr. After seeing this tool, I immediately thought about how useful it would be for generating a browseable directory of event listings. You simply supply grazr with an OPML file, and it will then display all the RSS feeds and their entries. After I get a couple more event listing sites scraped, I’ll generate the OPML file and try them out with grazr.

Mike also mentions on his blog Tom Morris’ idea of using grazr to ‘kill MySpace’ by creating a better way for independent bands and artists to self-promote using OPML. Note to self: follow up with Tom to discuss this idea further. I love the integrated MP3 player in his grazr box. Update: left him an Odeo message.


How to Feel Miserable as an Artist

June 12, 2006 at 12:09 am

Remind yourself. From the Wish Jar Journal, found via Michael Martine’s blog.


Building a live music calendar

June 11, 2006 at 3:04 pm

While reading Derek Sivers’ O’Reilly blog, I came across Marc Hedlund’s talk Entrepreneuring for Geeks, which describes how those of us who are more technically minded can move into starting companies of our own. He started the talk with a set of proverbs.

The three proverbs that struck close to home for me were:

  • Pay attention to the idea that won’t leave you alone.
  • Build what you know.
  • Momentum builds on itself.

Pay attention to the idea that won’t leave you alone

Several events have occurred in the past two weeks which have echoed these words in my mind.

At BarCampBoston I spoke with other geek entrepreneurs about the problem of finding live music, and the guys from tourb.us told me how they are scraping venues’ sites to get concert listings. They are providing a service that answers a particular need – when is my favorite band coming to town?

This triggered a memory of an exchange I had more than a year ago with trombonist Phil Wilson at the Jazz Journalists Association panel at Scullers Jazz Club. Jon Hammond organized a panel discussion on the topic of Boston as a Launching Pad for a Jazz Career. I asked the panel what kind of online tools or services could be provided to re-ignite the jazz scene in Boston, and Phil said that he would like to see a service that would notify him when a particular musician was going to be performing.

Then at the last Python meetup, Dan Milstein raved about the Python scraping library BeautifulSoup and described how capable it was at scraping baseball scores off a website. I played around with BeautifulSoup a while ago, but never actually built anything using it.

Scratch an itch

“Build what you know” echoes the most basic advice about idea generation: scratch an itch you have yourself. Now I have an itch to scratch. I love going out to hear live music, especially jazz – but there is no single site that aggregates the concert listings. There are several sites I must visit:

  • MyRootdown Improv Music Calendar is a great site built by graphic designer and improv enthusiast Shawn dos Santo. Shawn is doing a great job of posting events he hears about, but there’s no way for people to post their own gigs.
  • The WGBH Jazz Calendar is good, but again, it doesn’t have an RSS/iCal feed, so I have to manually visit the site every time I want to see who’s playing.
  • Each and every venue has its own concert listing page (Scullers, Regattabar, Wally’s, Berklee, Reel Bar, etc.) and of course, none of them have RSS or iCal feeds.
  • I’m sure there are others that I don’t know about.

The basic problem here is fragmentation of information. Since none of the sites publish their event listings in any sort of structured way (RSS, iCal, hCalendar), it’s tedious to monitor them and thus hard to stay on top of what’s going on in the Boston jazz scene.

The “Pull” method

Immediately after hearing Phil’s suggestion, my technical mind started churning as I thought about generating dynamic RSS feeds based on artist or band name, and then using something like Feedblitz to turn those RSS feeds into email notifications. As much as we geeks would like to think otherwise, the average person still has no idea what an RSS feed is or how to use it. Email is still the lowest common denominator.

But the question remains: how do you get the data into a system in the first place? It’s not likely that musicians will enter their gig listings themselves. And here is where Beautiful Soup comes in – if I scrape the event listing sites, I can put the data into a system, extract the metadata (band, location, date/time, cost, etc.), and syndicate the concert listings as RSS feeds and, from there, email notifications.
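
A per-artist feed is then just a filter over the scraped items before the XML gets written. Here is a minimal sketch with PyRSS2Gen, assuming 'items' is the list of RSSItem objects produced by the scraping step and using a placeholder link:

    import PyRSS2Gen

    def artist_feed(items, artist, filename):
        # 'items' is assumed to be a list of PyRSS2Gen.RSSItem objects from the scraper.
        matching = [i for i in items if artist.lower() in (i.title or '').lower()]
        rss = PyRSS2Gen.RSS2(
            title='Boston jazz gigs: %s' % artist,
            link='http://www.example.com/feeds/',   # placeholder
            description='Scraped listings mentioning %s' % artist,
            items=matching)
        rss.write_xml(open(filename, 'w'))

    # artist_feed(items, 'Phil Wilson', 'philwilson.xml')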

There is even a Python script called Scrape ‘n’ Feed which will automatically turn a page scraped with BeautifulSoup into an RSS feed. This is why I love Python – there is almost always a library that does exactly what you want. And there is also a Python script to convert iCal into RSS.

The “Push” method

Now suppose for a moment that musicians could be persuaded to enter their gigs into some sort of system. What if you could offer a service, let’s call it GigBlast, that would push their gig information out to a bunch of event listing services: WGBH, eventful.com, upcoming.org, boston.craigslist.org, meetup.com, etc.? Where a service provides an API, GigBlast would use it; for a site like WGBH that has no API, a Python library such as ClientForm could submit the listing form directly.
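
Nothing like GigBlast exists yet, so the sketch below is purely hypothetical: one publish function per listing service (an API call, or a form submission for WGBH), plus a fan-out loop that records which services failed.

    # Hypothetical sketch of the GigBlast fan-out; none of these helpers exist yet.
    def push_gig(gig, publishers):
        # Send one gig record to every listing service we know how to talk to.
        failures = []
        for name, publish in publishers.items():
            try:
                publish(gig)           # each helper wraps one service's API or web form
            except Exception, err:     # Python 2 syntax
                failures.append((name, str(err)))
        return failures

    gig = {'band': 'My Quartet', 'venue': 'Scullers',
           'date': '2006-07-01 20:00', 'cost': '$15'}
    # publishers = {'eventful': publish_eventful, 'upcoming': publish_upcoming,
    #               'wgbh': publish_wgbh_form}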

This would make it easier for musicians to get the word out about their gigs, give fans a tool to be informed when these musicians are performing, and ultimately get more people to go out to hear music, which would create more demand for live music. Maybe I’m an idealist to think it will have such far-reaching effects, but even if no one else uses this service, at least I’ll be scratching my itch!

Momentum builds

Stay tuned for more thoughts on publishing events to the web using Apple’s iCal. This will simplify the data entry process even more, as musicians can simply add their event info to iCal and have it transparently uploaded to their website and automatically pushed out to the event listing services in the background.

I also want to explore the use of microformats such as hCalendar, which I think have a better chance of being adopted by musicians, venues and bloggers since they are fairly easy to implement – just a few changes to the HTML template. Pages formatted with hCalendar are a breeze to scrape using Technorati’s events feed service and can be searched using Technorati’s experimental Event Search tool.

Well, after many days of sideways rain, the sun has finally come out in Boston, so I’m going for a jog in the Fens.


Bicycle taxis come to New York City

June 11, 2006 at 12:00 pm

New York City is finally getting bicycle rickshaws as an alternative to traditional motorized taxis. Copenhagen has had these for years (Cykel Taxi, Copenhagen Rickshaw) and they’re a great way to see the city.

It would be interesting to see if the wildly successful free city bikes program, which was started 11 years ago in Copenhagen, would work in NYC.

“Taking the bikes beyond central city limits releases a hefty fine, but some of the bikes have reportedly been seen as far away as New York City and the French Riviera. One bike even made it on board Air Force One – a special gift from the city to President Clinton when he visited in 1997.” (Source: The Copenhagen Post 4/28/06)


Analyzing traffic with Google Analytics and Feedburner

June 10, 2006 at 9:26 pm

Google Analytics

I signed up for Google Analytics right when I heard about it, and have been using it to monitor the traffic on several of my sites. I still use the excellent open source awstats because it is more configurable and offers a finer grain of control, but Google Analytics is a great way to get a snapshot of the traffic patterns on your site. Measuremap is also an interesting tool, but they are currently not offering new accounts. I’ve signed up to be notified when they start offering accounts again.

Feedburner

Tom Parish, an expert in search engine optimization (SEO), encouraged me to try out Feedburner, and I have to say, it is very impressive. Feedburner basically monitors how much traffic your feed is getting, and shows you how many people are subscribed to it. It can even tell you how many people clicked through to each blog post.

Feedburner Services


FeedBurner has something called SmartFeed which translates your feed on-the-fly to RSS or Atom, depending on what kind of reader the person is using. The BrowserFriendly feature renders the feed in a human-readable format, instead of the typical XML format which is virtually useless for humans. FeedFlare adds a bunch of quick links to each post to make it easier to bookmark the post and see other pages which link to the post.

The Photo and Link splicer tools pull in all your photos and bookmarks from Flickr and del.icio.us and merge them into your blog’s feed. I didn’t really want those cluttering up my blog posts, so I opted to leave this feature disabled. But I might create a new feed which includes blog posts, photos and links to get a “stream” of everything I am submitting. There is also something called Amazon ID burner which will auto-insert your Associates ID into any links to Amazon.com catalog items it finds in your posts.

Feedburner Feed Replacement (WordPress plugin)

I installed the Feedburner Feed Replacement plugin, which forwards all feed traffic to Feedburner while creating a randomized feed for Feedburner to pull from. This provides a permanent feed URL, so that if I ever decide to change my blogging software and the blog feed changes, I can just tell Feedburner about the new URL and my subscribers don’t have to re-subscribe.

WordPress Reports

I just discovered another plugin today called WordPress Reports, which generates reports from Google Analytics and Feedburner. Here is a screenshot of the Google Analytics data, but I couldn’t get the Feedburner stats to show up because it said my account isn’t enabled for the Awareness API. Maybe I need to upgrade to the paid account?


Listening for messages using listen

June 10, 2006 at 4:03 pm

Alec Mitchell’s listen product is not only an excellent example of how to use the latest Zope 3 technologies in Plone, it’s also a very useful product. Listen is a mailing list management application that integrates into Plone. It’s based on Maik Jablonski’s Mailboxer product, which has been around a long time and has proved to be very stable.

PloneMailboxer was an attempt to integrate Mailboxer into Plone, but it hasn’t seen much development in over a year.

What I like about listen is that users can subscribe either as a user logged into the Plone site, or by simply providing an email address (while browsing the site anonymously). They can also post new messages and reply to existing messages through the Plone interface. All messages posted through the website are relayed to the mailing list, and vice versa.

It took a while to set it all up because it requires some configuration on the Linux server. Here is a summary of my experience.

1. Install listen

I grabbed the listen bundle and symlinked all of its products into my Zope instance.

Note: I put all of my products in a global $PRODUCTS directory, usually in /usr/misc/zope/products
Then for each Zope instance, I symlink the products I need into that instance’s products directory. This way I only have to update the products in one place for all my Zope instances.

2. Set up a test instance

Restarted Zope and made a test Plone instance 'mysite'. Installed listen and made a listen instance 'mylist'.
So the listen instance is now at: http://www.domain.com/mysite/mylist
The email address for this list will be: mylist@domain.com

3. Set up the smtp2zope script
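
The smtp2zope.py script ships with MailBoxer; it reads an incoming message on stdin and delivers it over HTTP to a Zope URL, with a cap on the message size. All the setup really amounts to is putting it somewhere stable on the server and making sure the mail system can execute it, since the alias in the next step pipes mail into it.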

4. Make the alias for the mailing list

Put the following in .qmail-mylist:
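
It’s essentially a one-liner piping the message into smtp2zope.py with the list’s URL and a maximum message size in bytes, along the lines of '|/path/to/smtp2zope.py http://www.domain.com/mysite/mylist/manage_mailboxer 1000000' (the paths are placeholders, and manage_mailboxer is the MailBoxer convention for the mail-in handler; check listen’s README for the exact target).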

Note: This step took the longest time because I couldn’t figure out where to make the .qmail-mylist file. First I put it in /var/qmail/alias, but it wasn’t getting read from there. Since my box is serving multiple domains, each domain has its own qmail aliases. It also didn’t help that all the instructions for setting up Mailboxer assume that you are running postfix or sendmail. Luckily I found this howto which explained how to do it for qmail.

If you are using postfix or sendmail, then you need to edit the ‘/etc/aliases’ file and add the following:
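
It’s the same pipe command, prefixed with the list name, e.g. mylist: "|/path/to/smtp2zope.py http://www.domain.com/mysite/mylist/manage_mailboxer 1000000" (again, placeholder paths and the MailBoxer-style handler).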

Once you save this file, you need to run the command ‘newaliases’ to refresh the aliases. The README.txt explains all this pretty well.

5. Configure the MailHost

By default, Plone now ships with SecureMailHost. I wasn’t able to get listen to work with either SecureMailHost or the recommended MaildropHost due to the error described in this issue. I deleted the SecureMailHost object in the Plone root and added a normal MailHost object. I left the SMTP authentication fields empty, and then it worked. If anyone knows why it’s giving an sslerror, please respond to the issue in the tracker.

What’s next?

I’d like to migrate all the Mailboxer archives from lists.plone4artists.org over to listen, but I’m not sure how easy this will be. I’d also like to be able to search the messages in the archive; currently they don’t appear in the results when I use Plone’s LiveSearch, though that might be intentional.


Macbook on the way

June 9, 2006 at 3:35 pm

Well, after a week of research and drooling, I finally ordered the Macbook 13″ 2.0GHz. I decided to get the Macbook instead of the Pro because I didn’t really see much value in the dedicated graphics chip, illuminated keyboard and ExpressCard slot.

I ordered it with the stock 512MB RAM and 60GB HD, and I’m planning to upgrade it myself to 2GB and a 100GB 7200 rpm drive. I’ll repurpose the extra 60GB HD as an external drive for backups.

Apple’s upgrade cost of $500 for 2GB of RAM is just ridiculous when I can get 2GB for $158 shipped from Omni Technologies, recommended by Alexander Limi and by several people on this thread.

My only complaint is that the new Macbooks only come with Firewire 400, except for the 17″ model, which has Firewire 800. After lugging around a Powerbook 17″ for 3 years, I decided my next laptop would be something more lightweight. For doing large sustained writes (such as audio/video capture), Firewire 800 will definitely give better performance.

I read several articles and forum threads about the differences between 5400 and 7200 rpm drives, and decided to go with the Seagate Momentus 100GB drive for $209 at Other World Computing. It won the Tech Report Editor’s Choice as the best 2.5″ drive when compared with several other drives.

Even though it’s supposedly a bit slower than the Hitachi Travelstar, there were reports of the Travelstar being noisy, while the Seagate Momentus was described as whisper soft.

LaptopLogic did a full review and face-off between the Hitachi 7K100 and the Seagate 7200.1, and another review of a bunch of 2.5″ drives.

The latest shipping estimates from Apple.com report that the Macbook will be shipped on the 15th and arrive on the 20th. I hope it ships earlier than the 15th because I’m heading to San Francisco on the 18th and really would like to get it before I head out of town.


Blast from the past!

June 8, 2006 at 2:47 pm

Thanks to Sidnei da Silva’s handy Python script for migrating Plone blog entries to Movable Type format, I was able to modify it for my own use and import 16 posts that were trapped inside a defunct Quills 0.8 blog into this WordPress blog.

Since there were no comments, I was able to greatly simplify the script. A few other changes that I had to make to get it to export correctly:

  1. Hardcoded the categories since they are all related to Plone and Plone4Artists.
  2. Added a condition to check that the object was actually a weblog entry and not a weblog topic, since with Quills 0.8, both are stored in the same container.
  3. Changed the method from getCookedBody to getBody, since that is how it’s defined in a Quills weblog entry.
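
Putting changes 2 and 3 together, the per-entry loop ended up looking roughly like the sketch below, where weblog is the Quills weblog folder, write_mt_entry stands in for the part of the script that emits a Movable Type record, and the exact portal_type string may differ in your Quills instance.

    # Rough shape of the export loop; write_mt_entry is a stand-in for the code
    # that writes one entry in Movable Type import format.
    for entry in weblog.objectValues():
        if entry.portal_type != 'Weblog Entry':    # skip weblog topics etc. (change 2)
            continue
        write_mt_entry(title=entry.Title(),
                       date=entry.created(),
                       categories=['Plone', 'Plone4Artists'],  # hardcoded (change 1)
                       body=entry.getBody())                   # not getCookedBody (change 3)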