Many modern TVs (and set-top boxes, gaming consoles, etc.) support DLNA streaming. Suppose you have a PC that stores all your music, downloaded podcasts, video podcasts, photos, and so on. You can run DLNA media server software on that PC and stream your entire media collection to your TV over your home network. No more carrying USB sticks around: it's all in your home cloud.
On GNU/Linux, I am using MediaTomb as my DLNA server. It’s nothing fancy (it’s a file server, after all), and it just works.
Okay, this takes care of media files stored on your PC. But can we do more? Is it possible to stream a live capture of your desktop to the TV?
Let’s say you’re watching a Flash video in your browser, and there’s no way to download the video file. Or, you’re watching a live event being streamed via Flash or whatever. It would be kinda cool to be able to stream that to your TV via DLNA. And it’s possible—not trivial, mind you, but I’ve seen it working at least once…
In my previous article, I gave an introduction to the built-in features of the Android platform for supporting screens of various sizes and densities. In this article, I am going to expand on this and show you actual code for achieving screen-independence in an app. My example will be a Processing app (as that’s my own primary use case), but the ideas should apply equally well to any game or graphics-centric app.
Let’s use the game “Chimpzilla Attacks!” as an example. I made up this game specifically for the purpose of this tutorial—and already spent way too much time on the mockup. I have no idea what the game mechanics are, but judging from the cheesy graphics, it’s got to be some kind of “Punch The Monkey” knock-off. Anyway, back to the tutorial…
In my ongoing effort to port a desktop Processing application to Android, I am now trying to add support for multiple screen sizes. The goal is that the same .apk works equally well on, say, a 3.7″ Milestone, a 7″ Galaxy Tab, and a multitude of other screens. The Android Dev Guide has a comprehensive page that explains the platform features for supporting different screens. I did my own tests and experiments to better understand the concepts. This article explains my findings and hopefully saves you some time and work.
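The central mechanism behind those platform features is simple enough to sketch in a few lines (my own illustration, not from the Dev Guide): Android's baseline density is 160 dpi, and DisplayMetrics.density reports the ratio of the actual density to that baseline, so density-independent pixels (dp) convert to physical pixels by a single multiplication.

```java
// Sketch: converting density-independent pixels (dp) to physical pixels.
// On a device you would read the density from
// getResources().getDisplayMetrics().density; here it is a plain parameter.
public class DpConverter {
    // density is 1.0 on a medium-density (160 dpi) screen,
    // 1.5 on a high-density (240 dpi) device like the Milestone
    public static int dpToPx(float dp, float density) {
        return Math.round(dp * density);
    }

    public static void main(String[] args) {
        System.out.println(DpConverter.dpToPx(100f, 1.0f)); // 100 px on mdpi
        System.out.println(DpConverter.dpToPx(100f, 1.5f)); // 150 px on hdpi
    }
}
```

The same 100 dp sprite therefore occupies roughly the same physical size on both screens, which is exactly what a game wants.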
I added sound playback to the Processing app that I’m currently porting to Android. On the desktop, I used the Minim sound library, which is not supported on Android (although there is some discussion about alternatives and a possible port). All I really need from a sound library is a way to play several .ogg (or .mp3 or .wav) files simultaneously and asynchronously, query their playback durations, maybe set the playback volume of individual sounds, and stop playing sounds at any time.
In general, a sound can be played using the following code:

MediaPlayer snd = new MediaPlayer();
snd.setDataSource(fileName);
snd.prepare();
snd.start();
My own app is a game that contains only a handful of .ogg files. I keep a HashMap of the MediaPlayer instances for all sounds in the game, with prepare() already called on them. Then I simply call start() whenever an event triggers the sound in the game.
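The setup described above might look something like this. This is a hedged sketch: the sound names and R.raw resource identifiers are made up for illustration, but MediaPlayer.create() and the lifecycle calls are the standard android.media.MediaPlayer API (create() returns a player with prepare() already done).

```java
// Sketch of the HashMap-of-prepared-players approach described above.
// Resource names (R.raw.punch, R.raw.roar) are hypothetical.
import java.util.HashMap;

import android.content.Context;
import android.media.MediaPlayer;

public class GameSounds {
    private final HashMap<String, MediaPlayer> sounds =
            new HashMap<String, MediaPlayer>();

    public GameSounds(Context context) {
        // MediaPlayer.create() returns an already-prepared player
        sounds.put("punch", MediaPlayer.create(context, R.raw.punch));
        sounds.put("roar", MediaPlayer.create(context, R.raw.roar));
    }

    public void play(String name) {
        MediaPlayer snd = sounds.get(name);
        if (snd != null) {
            snd.seekTo(0);  // rewind so repeated triggers restart the sound
            snd.start();
        }
    }

    public void release() {
        // free the underlying codec resources when the game shuts down
        for (MediaPlayer snd : sounds.values()) {
            snd.release();
        }
        sounds.clear();
    }
}
```

Releasing the players when you are done matters on Android: each MediaPlayer holds native resources, and a handful of cached instances is fine, but leaking them is not.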
Following up on my “First Steps” article, I did some more Processing development on Android. I ran into some problems, solved some, and was able to start my ported Processing program inside the emulator.
This article is still not about actual Processing source code. All I did was fiddle with the Android SDK tools some more.
Building Without Eclipse
Eclipse is kind of a resource hog (yeah, I know, “I told you so” just ain’t saying it). Therefore, I took a closer look at the command line tools from the Android SDK to replace it (at least part of the time).
To convert an existing Eclipse Android project to an Ant project that you can build from the command line, use the “android” tool from the SDK:
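A sketch of the invocation (the project path and name are placeholders; run `android --help` to check the exact options of your SDK version):

```shell
# Generate build.xml and local.properties for the existing project
android update project --path /path/to/YourProject --name YourProject

# Then build a debug .apk with Ant
cd /path/to/YourProject
ant debug

# And install it on a connected device or running emulator
adb install -r bin/YourProject-debug.apk
```

The generated build.xml coexists with the Eclipse project files, so you can keep switching between Eclipse and the command line.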
I got into mobile development recently. I ported an SDL-based game to the Openmoko FreeRunner and to an iPhone 3GS. Now I’m planning to do some Android development on a Motorola Milestone (the European version of the Droid).
I had previously been using the Processing programming environment on the desktop, so I was delighted to learn that it is now available for Android.
(“Processing is an open source programming language and environment for people who want to create images, animations, and interactions.” It won a Prix Ars Electronica award in the “Net Vision / Net Excellence” category in 2005.)
I am by no means an Android or Processing expert. This article serves mainly as documentation for myself about the steps to build a simple application. If it is also useful for others, so much the better.
Here’s how I created the panoramas that I posted to my GUADEC 2010 photo album (on Flickr and on Picasa).
I used the Hugin panorama photo stitcher and did some post-processing using GIMP. Hugin should be available in the repositories of your GNU/Linux distro, and it’s dead simple to use.
These are four pictures I took at the intersection of Grote Halstraat, Hoogstraat, and Gravenstraat in The Hague with a camera phone. I made sure that the pictures have enough overlap. (The Hugin tutorial recommends 20% to 30% overlap.)
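If you ever want to script the stitching instead of clicking through the GUI, Hugin also ships command-line tools. A hedged sketch (these tool names are from the Hugin suite, but exact flags can vary between versions):

```shell
# Sketch of a batch stitch with Hugin's command-line tools.
pto_gen -o project.pto img1.jpg img2.jpg img3.jpg img4.jpg  # create project
cpfind -o project.pto project.pto         # find control points in the overlaps
autooptimiser -a -m -l -s -o project.pto project.pto  # optimise photo positions
nona -m TIFF_m -o remapped project.pto    # remap the source images
enblend -o panorama.tif remapped*.tif     # blend them into the final panorama
```

The post-processing in GIMP (cropping, touching up seams) still happens on the resulting panorama.tif.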
New-style classes are part of an effort to unify built-in types and user-defined classes in the Python programming language. New-style classes have been around since Python 2.2 (not that new anymore), so it’s definitely time to take advantage of the new possibilities.
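As a quick illustration (my own minimal example): a new-style class is simply one that inherits from object, and that inheritance is what enables features such as properties.

```python
# In Python 2, inheriting from object makes a class "new-style";
# in Python 3 every class is new-style automatically.
class Temperature(object):
    def __init__(self, celsius):
        self.celsius = celsius

    @property
    def fahrenheit(self):
        # A computed attribute: properties require new-style classes
        # in Python 2 -- on an old-style class this would silently
        # behave like a plain attribute.
        return self.celsius * 9.0 / 5.0 + 32

t = Temperature(100)
print(t.fahrenheit)  # 212.0
```

The same mechanism (the descriptor protocol) also underlies classmethods, staticmethods, and __slots__, which is why they all depend on new-style classes.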