Live Desktop Streaming via DLNA on GNU/Linux

TWiT and the Ubuntu terminal on a TV set via DLNA

Many modern TVs (and set-top boxes, gaming consoles, etc.) support DLNA streaming. Suppose you have a PC that stores all your music, downloaded podcasts, video podcasts, photos, and so on. You can run some DLNA media server software on your PC and stream your entire media collection to your TV over your home network. No more carrying around USB sticks, it’s all in your home cloud.

On GNU/Linux, I am using MediaTomb as my DLNA server. It’s nothing fancy (it’s a file server, after all), and it just works.

Okay, this takes care of media files stored on your PC. But can we do more? Is it possible to stream a live capture of your desktop to the TV?

Let’s say you’re watching a Flash video in your browser, and there’s no way to download the video file. Or, you’re watching a live event being streamed via Flash or whatever. It would be kinda cool to be able to stream that to your TV via DLNA. And it’s possible—not trivial, mind you, but I’ve seen it working at least once…


The same approach that’s taken here for live streaming might also be useful for on-the-fly transcoding (e.g., an .ogg file needs to be transcoded to .vob for the player to be able to read it).

(I should mention that something like this isn’t unheard of. In fact, my Philips TV came with the Windows-only(!) WiFi MediaConnect software to do desktop streaming. I have never seen it in action, because I don’t use Windows. Of course, Philips based the TV’s firmware on the Linux kernel, like so many other manufacturers do. But, also unsurprisingly, they use it because it’s “free as in beer”, not because they care about users’ freedom. The fact that I could still get all this to work on GNU/Linux is thanks to the invaluable work of many free software projects: MediaTomb, FFmpeg, Matroska, FUSE, Python, to name just a few.)

What you can find here

I wrote a couple of scripts that allow you to capture your desktop and stream it to a DLNA-capable player. To use the code as-is, your player must support the Matroska (.mkv) file format.

I used the scripts in conjunction with MediaTomb, but other media servers should work just as well.

The scripts are very rough at the edges, and if you are afraid of the command line or of reading Python code, you shouldn’t attempt to use them.

Download the scripts from GitHub

See Usage Instructions below.

What is missing / Invitation to contributors

As I mentioned, the scripts currently only work with devices that can play Matroska files via DLNA. The basic ideas and concepts should apply equally well to MPEG-2 and other formats. I’m not sure about MPEG-4 files, though, and I would appreciate feedback from someone more familiar with the format.

The scripts could definitely use better error checking, a nicer command-line interface, and lots of testing. If anyone’s interested in helping with that, please contact me.

Usage Instructions

Note: I am in the process of updating the scripts to work on the newest Ubuntu and to be more user-friendly. You can follow my progress here.

  • Make some changes to the FUSE script to specify the temporary file, control the captured display region, set the output format, etc.
  • Configure MediaTomb, as described below.
  • Mount with “python -s -f fuse_mnt”. This automatically starts the capture.
  • At this point, add the file “fuse_mnt/a/fuse_live.mkv” to MediaTomb’s database. (You only have to do this once.)
  • Start playback.
  • As this is more of a proof-of-concept than a polished tool, please read on below and let me know if you have any feedback.

The Basic Idea

First off, how do we capture the desktop? Someone named Verb3k explains this in “How to do Proper Screencasts on Linux Using FFmpeg”. Here’s an example:

ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 20 -s 1024x576 -i :0.0+128,224 -acodec ac3 -ac 1 -vcodec libx264 -vpre fast -threads 0 -f matroska ~/Videos/capture.mkv

This command line takes sound from PulseAudio and screen images from X11 (at 20 fps) and combines them into a Matroska file using the H.264 codec for video and AC3 for audio. It grabs a rectangular area of 1024×576 pixels, 128 pixels from the left edge of the screen and 224 pixels from the top edge of the screen.

Now, what happens when we have MediaTomb serve up the file capture.mkv to the player while the file is still being captured? If you are luckier than I was, it might just work, and you’re done. Maybe you can find some other combination of video codec, audio codec, and container file format that your player likes better. (Before attempting to do live streaming, you should have ffmpeg convert an existing video file, and finish the conversion before starting playback, in order to find a format that your player understands.)

Starting playback while the capture was still in progress didn’t work for me. When I started playback too soon after capturing had begun, the player would simply tell me the file was unplayable. When I waited a few minutes, the player would play only up to the point where I had started playback. That is, when I started playback after having captured for five minutes, playback would stop after five minutes, even if the captured file contained ten minutes of material by then.

You probably have an idea already why this might have failed. Let’s take a look at what the file contents might look like while the capture is in progress.

A video file after 2 minutes of capturing, after 5 minutes, after it is complete

In the figure, you can see a hypothetical file format that stores the data length and video duration in the front, then the video data, then a table containing seek information and other stuff that’s only known after encoding has finished. This particular encoder seems to update the duration field periodically while encoding is in progress. On the other hand, it leaves the data size field blank until it has finished.

This hypothetical case shows many things that can go wrong when playback starts in the middle of encoding (which, in the case of live streaming, basically means at any time):

  • The player might encounter the “unknown” size field and decide that the file is broken.
  • Or, it takes the “unknown” size as an indication that it should seek to the end of the file and determine the size itself (which breaks, because the file has no end yet).
  • The player might read the duration of 2:00 min. and never look at it again—playback will simply stop after 2:00 min., no matter what happens to the file in the meantime.
  • The player might know that there’s supposed to be a seek table, a list of keyframes, a checksum, etc., at the end and fail when it tries to read it.
  • <endless other complications>

The scripts that I wrote try to circumvent these issues in two steps:

  • Modify the file that ffmpeg produces during the capture so that the file appears to be a regular, albeit very, very long, video. Give the player all the information that it needs right away, so that it does not try to seek (through the media server) to various places in the file, searching for the information.
    • For Matroska files, this is what “” does. Hopefully, it will be possible to write filters for other container file formats in the future.
  • Intercept calls to the filesystem, so that when the player (through the media server) tries to access parts of the file that don’t exist yet, we can wait for ffmpeg to produce them (if it’s just video data for 5 seconds in the future, for example), or come up with fake data.
    • This is what “” does. This is a virtual (FUSE) filesystem that simply waits for ffmpeg to produce more data when the media server tries to prefetch more data than is available in the file.

Matroska is a container format that lends itself well to this kind of interception. Matroska has streaming support (i.e., it defines what should go into the headers so that players know it’s a live stream), but unfortunately, my particular player didn’t care much.

There are other container formats where I’m not sure that such a thing is possible. In MP4, for example, there are atoms like “stsz” (“sample table sizes”) and “stss” (“sample table sync samples”) that seem to go before the video stream and that contain information about the encoded sizes of frames—I’m not sure there’s a way to fake this data without waiting for the encoding to finish. If you are familiar with the MP4 or QuickTime formats and have an idea, please leave a comment!

Filtering Matroska for Live Streaming

The Matroska specification points out that a live stream is designated by setting the “Segment” size to “unknown”. ffmpeg does this, but it didn’t convince my TV to treat the file as a live stream. Instead, I ended up simply setting the size to a very large value and setting the duration of the video to 100 hours.
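To make the hack concrete, here is a small Python sketch (my own illustration, not code from the scripts) of the two encodings involved: the 8-byte EBML “vint” with all value bits set, which the Matroska spec reserves to mean “unknown size”, and a concrete, very large size that can be written in its place:

```python
def encode_size_8(n):
    """Encode n as an 8-byte EBML "vint": the leading byte 0x01 marks
    an 8-byte field, followed by 7 bytes (56 bits) of value."""
    assert 0 <= n < (1 << 56) - 1, "all-ones is reserved for 'unknown'"
    return b'\x01' + n.to_bytes(7, 'big')

# All 56 value bits set: the reserved "unknown size" marker that
# ffmpeg writes for the Segment of a live capture.
UNKNOWN_SIZE = b'\x01' + b'\xff' * 7

# The filter can swap it for a huge concrete size instead (here 256 TiB),
# so players that choke on "unknown" see an ordinary, if enormous, file.
huge = encode_size_8(1 << 48)
```

Since both encodings are exactly 8 bytes, the swap leaves every following byte offset intact, which is what makes it safe to patch the stream on-the-fly.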

In addition, I suppress “SeekHead” elements (pointers to other sections of the file) and “Cues” elements (pointers to keyframes for fast-forwarding). This isn’t strictly necessary; ffmpeg only produces these elements when encoding finishes (which it never does with a live capture). However, this functionality was quite handy when testing out the DLNA streaming with existing .mkv files.

As a final hack, I output 128 KB of “Void” data after each “Cluster” (which appears to be a block of ~5 seconds of audio/video data). The “Void” data doesn’t serve any purpose other than giving me something to send to the player when it requests data. The player prefetches data; without the “Void” blocks, there is sometimes not as much data available as it requests, because ffmpeg hasn’t produced it yet. If the requested data can’t be delivered fast enough, the player appears to give up. With the “Void” data, there is always enough to satisfy the player, even though it doesn’t contain anything useful. At least, that’s what I think is happening…
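For illustration, such a Void element is cheap to construct. Here is a sketch in Python (again my own, not the actual filter code); Void’s one-byte EBML ID is 0xEC, and the 3-byte length field used here covers payloads up to 2^21 − 2 bytes:

```python
def make_void(payload_size):
    """Build a Matroska Void element: the 1-byte EBML ID 0xEC, a 3-byte
    EBML vint length field, then payload_size bytes of zero padding."""
    assert 0 <= payload_size < (1 << 21) - 1, "too big for a 3-byte vint"
    # 3-byte vint: leading bits 001, then 21 bits of payload length.
    length = bytes([0x20 | (payload_size >> 16),
                    (payload_size >> 8) & 0xff,
                    payload_size & 0xff])
    return b'\xec' + length + b'\x00' * payload_size

# 128 KB of padding after each Cluster, as described above.
void_block = make_void(128 * 1024)
```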

All this is done in the Python script “”.


  • python <mkv-filename> : Outputs a pretty-printed tree of the Matroska file structure. (Similar to the mkvinfo tool from the mkvtoolnix package.)
  • python - : Reads a Matroska file from stdin (e.g., the output of ffmpeg writing to stdout) and writes the modified Matroska file to stdout on-the-fly, i.e., data is written as soon as it becomes available, without waiting for the input to end.

As an example, to filter an ffmpeg-produced live stream, write this:

ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1024x768 -i :0.0 -acodec pcm_s16le -vcodec libx264 -vpre fast -threads 0 -f matroska - | python - >~/Videos/filtered_live.mkv

(I tried to use libebml and libmatroska in C++ first. However, documentation was hard to come by, and the code wasn’t quite self-explanatory. I found a Matroska tag reader written in Python by Johannes Sasongko and built the filter based on that.)

FUSE Filesystem to Fool the Media Server

When you’re using GNOME, chances are you’re using FUSE filesystems already. When you use “Places”→”Connect to Server” to connect to an FTP server, for example, the remote server appears as a local folder in ~/.gvfs. This is a virtual filesystem that uses FUSE.

For the DLNA streaming, I decided to write a FUSE filesystem in Python. This filesystem would appear to MediaTomb as a regular directory containing a Matroska video file. Whenever MediaTomb would access the file or read parts of it, my Python code could intercept these calls and do its magic.

When the filesystem is mounted (i.e., when the Python script is started), the desktop capture is started and redirected to a temporary file. When MediaTomb (or any other program) requests some part of the file, the script can check whether there’s enough data in the file. If yes, it simply returns the data. If not, it blocks briefly until ffmpeg has written enough data. If the player tries to read too far ahead, this might indicate that the file isn’t suitable for live streaming yet, and the script will log an error. (This shouldn’t happen for Matroska files anymore, but it will be useful when trying to add support for more container formats later.)
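The blocking logic described here boils down to something like the following sketch (simplified and of my own invention; the polling interval and timeout values are made up for illustration, and the real script implements this inside the FUSE read() callback rather than as a standalone function):

```python
import os
import time

POLL_INTERVAL = 0.1   # seconds between file-size checks (illustrative)
MAX_WAIT = 30.0       # give up if the encoder falls this far behind

def blocking_read(path, offset, length):
    """Return `length` bytes at `offset` from the capture file,
    waiting for ffmpeg to produce them if it hasn't grown that far yet."""
    deadline = time.monotonic() + MAX_WAIT
    while os.path.getsize(path) < offset + length:
        if time.monotonic() > deadline:
            # The player is reading too far ahead; the real script
            # logs an error here instead of serving fake data.
            raise IOError("reader too far ahead of the encoder")
        time.sleep(POLL_INTERVAL)
    with open(path, 'rb') as f:
        f.seek(offset)
        return f.read(length)
```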

The FUSE filesystem is in “”. It requires the Python bindings for FUSE (package “python-fuse” on Ubuntu).

You should make a few changes to the file to adapt it to your needs:

  • Change the variable TEMP_FILE. While ffmpeg captures the desktop, the resulting video is not kept in memory but written to this file. This means you need some free space on your hard disk while watching live streams. Of course, the whole purpose of the FUSE filesystem is that the file doesn’t need to exist physically. At a later point, I will change the FUSE script to keep only part of the ffmpeg output in memory and discard older parts once the player has read them. For now, the temporary file is used as a buffer.
  • Change the ffmpeg command line. The current command corresponds roughly to this:
    • MONITOR=$(pactl list | grep -A2 '^Source #' | grep 'Name: .*.monitor$' | awk '{print $NF}' | tail -n1)
      parec -d "$MONITOR" | ffmpeg -f s16le -ac 2 -ar 44100 -i - -f x11grab -r 20 -s 1024x576 -i :0.0+128,224 -acodec ac3 -ac 1 -vcodec libx264 -vpre medium -threads 0 -f matroska - | python matroska_live_filter - > ~/Videos/live.mkv
  • Instead of using “ffmpeg -f alsa -i pulse”, which produced crackling noises every now and then, I use “parec” (the PulseAudio recorder) to capture the audio. The part “-f s16le -ac 2 -ar 44100” is the format that parec produces (at least for me): 44 kHz, 16-bit stereo. “-r 20” instructs ffmpeg to capture at 20 fps. I chose “-s 1024x576 -i :0.0+128,224” to capture a 1024-pixel-wide rectangle with an aspect ratio of 16:9 at the center of my screen, which is 1280×1024. You can change this to whatever suits you (as long as your computer can encode it fast enough). “-acodec ac3 -ac 1” converts the audio to the AC3 codec in mono (the TV had problems with stereo AC3 streams). “-vcodec libx264 -vpre medium” uses the “medium” profile for the H.264 encoding. “-vpre” can also be “fast”, “ultrafast”, “lossless_ultrafast”, and lots of others; you need to experiment to find an encoding profile that provides good quality, yet doesn’t overwhelm your CPU or network.
  • Note: If “parec” doesn’t record anything, open the “PulseAudio Volume Control” (installed with “sudo apt-get install pavucontrol”) and make sure that on the “Input Devices” tab, the device named “Monitor of Internal Audio Analog Stereo” isn’t muted.
  • Make a directory for the mount point and mount the filesystem:
    • mkdir fuse_mnt
      python -s -f fuse_mnt
    • “-s” means single-threaded (just in case my implementation isn’t entirely thread-safe), “-f” means foreground (so that you can see log output on stdout).
  • To test it, you can point Nautilus (assuming you use GNOME) at the fuse_mnt directory and play the file fuse_live.mkv that you find in there using a player of your choice.
  • Add the file “fuse_mnt/a/fuse_live.mkv” to MediaTomb’s database.
  • Note: I start “mediatomb” manually from a terminal, which works just fine. The MediaTomb service that’s started automatically during boot, on the other hand, can’t see the file “fuse_live.mkv” due to permission problems; I’m not sure why.
  • To stop the FUSE filesystem, run “sudo umount fuse_mnt”. If this doesn’t work, you can also kill the process:
    • ps aux | grep
      kill -9 <pid>
  • If starting fails with “bad mount point: Transport endpoint is not connected”, make sure the process has been killed and run “sudo umount fuse_mnt” again.

Configuring MediaTomb

MediaTomb web UI: Ugly but functional. And ugly.

Install MediaTomb via your package manager (package “mediatomb” in Ubuntu). In Ubuntu, MediaTomb is started automatically as a service. The configuration file is in /etc/mediatomb/config.xml.

I prefer to start mediatomb manually whenever I need it and place the configuration file in ~/.mediatomb/config.xml.

Provided that the config.xml contains “<ui enabled="yes">”, you can open the MediaTomb GUI in a web browser at http://localhost:49152/ (or a subsequent port number). Once the FUSE filesystem is up and running, add the file fuse_mnt/a/fuse_live.mkv to the database. At this point, you should be able to find it and play it back on your DLNA player. (Of course, it can’t hurt to try a regular file first to check whether it works at all.)
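For reference, the relevant part of the config.xml might look roughly like this (a sketch based on MediaTomb 0.12’s schema; only the <ui> setting is discussed above, so treat the rest as placeholders for your own settings):

```xml
<config version="2" xmlns="http://mediatomb.cc/config/2">
  <server>
    <!-- Enable the web UI mentioned above -->
    <ui enabled="yes"/>
    <name>MediaTomb</name>
    <!-- Database, ports, and other server settings go here -->
  </server>
</config>
```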

How well does it work?

The image quality is astonishing, even with the “fast” encoding profile that I am using. The fonts and window details are very crisp. You have to look very closely to see the typical MPEG compression artifacts, if you can see them at all.

I am not trying to watch HD movies this way. I mostly use this for the TWiT live stream and similar talking-heads programs, so the 20 fps that my PC can deliver are good enough.

The latency is currently whatever it takes me to start the FUSE filesystem (which automatically starts the capture), walk over to the TV, and start playback there. I think if I delayed the capture until the file is actually accessed, I could reduce the latency to just a few seconds (although I do think it’s a good idea to give the encoder a head start). Maybe it will be possible at some point to reduce the latency far enough to remote-control the PC and get feedback almost instantly. We’ll see.

If the video hangs, try the obvious things: Reduce the frame rate, make the captured screen area smaller, lower the bitrate, etc.

Tested Hardware and Software

I tried all this on Ubuntu 10.10 (Maverick) with MediaTomb 0.12.1, FFmpeg 0.6, and Python 2.6.

The TV is a Philips PFL 7605H/12 with firmware (As far as I can tell, models 8605 and 9705 use the same firmware, so they might work as well.)

If any of you can successfully replicate this on other player models and brands (or even if you can’t), please leave a comment.

Request for Comments

Again, this is a proof-of-concept. I hope to expand and improve the scripts in the future. If you have any questions, suggestions, or comments, please leave a comment or contact me via e-mail.

42 thoughts on “Live Desktop Streaming via DLNA on GNU/Linux”

  1. Darn, I’m guessing this *won’t* work with a PS3 as the player as I’ve yet to see a single MKV file that the PS3 could even thumbnail.

    Will definitely keep it bookmarked though, thank you!

  2. @DarwinSurvivor You could try encoding to a .vob file (maybe name it .mpeg). The PS3 might be able to play that right away.

  3. Awesome. I was thinking about this the other day. I’ll give it a shot and post back with comments.


  4. Good solution! I tried it with partial success. Initially, the TV refused to show the file because of its length, so I reduced it from 100 hours to only 3. Additionally, I reduced the initial chunk of captured data (maybe this is the problem; I’ll try again tonight). With these changes, I almost get to see the first image snapshot (one or two seconds), wonderful! After that, the TV keeps showing “loading…”. But the whole solution, with the inline filter and the tweaks in FUSE, is awesome, congrats!
    My setup is on Windows 7, using a small coLinux Debian installation running MediaTomb, your code, and an X server running only a VNC client.
    The general idea is to set up a VNC server sharing my notebook’s secondary display, have the coLinux X server (empty, without a window manager or anything else) run a VNC client (connected to the secondary display, in view-only mode) shared via MediaTomb, and finally get the TV to show my notebook’s secondary “monitor” wirelessly ;).
    Thanks for sharing your knowledge, it’s very helpful!

  5. Is this similar to how apps like imediashare in the android market work?

    I’d love to be able to replicate how that app works on my desktop, especially being able to share content located elsewhere (youtube, etc).

  6. @dan iMediaShare appears to be a DLNA server, yes.

    MediaTomb on GNU/Linux can share videos stored on your PC, and it can also download and transcode videos on-the-fly. For this, you don’t even need to set up a live screen-capture (live capturing is useful if you can display a video on your PC but not download it).

  7. @Michael thanks, but MediaTomb doesn’t allow the remote-control part that I’m really looking for. I’m trying to find a way, from my Linux desktop, to:
    1. specify source file (local,remote dlna server, youtube, shoutcast, etc)
    2. specify dlna target (tv),
    3. control playback volume, etc

    Any ideas?


  8. Wow it’s incredible article! Thank you so much.
    I want to ask how can we or which programs can we use for “Live Desktop Streaming via DLNA on Windows?” 🙁

  9. I used this with Twonky and it worked OK. One annoying thing was that I had to rebuild my media database every time I changed the ffmpeg settings, which I didn’t realize until I couldn’t get my TV to play the video correctly even though the video would stream fine to my own desktop. The reason for this is that my TV doesn’t support MKV straight up (Samsung 6 series), so Twonky creates fake metadata for MKV files, but apparently only on the initial media scan.

    My other unfortunate issue is that my TV demands about 40 seconds of buffer before it will start playing which doesn’t work for my original intention of playing video games in my living room.

    Oh, make sure you don’t have spaces in your path to the scripts. They won’t work with spaces and give confusing errors.

  10. Great article. I’ve been looking for something like this for some time. Unfortunately, I couldn’t get it to work exactly like this using the fuse module (I think my player is very picky about streaming files), but I’ve taken some of the ideas from here and come up with something that works with my player.

    Now, I’m using Mediatomb’s transcoding features to achieve live(ish) desktop streaming. If anyone is interested, the basic outline of how to do it is this:

    (1) Create an empty file, desktop.dtp (or anything you like) and make sure it’s in a directory that mediatomb scans.
    (2) Configure mediatomb to map the dtp extension to some fake mime type (I called mine “video/desktop”).
    (3) Set up a mediatomb transcriber that works on mime types of video/desktop. The implementation is just a very similar command to the ffmpeg command above:

    ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 20 -s 1024x576 -i :0.0+128,224 -acodec ac3 -ac 1 -vcodec mpeg4 -threads 1 -f avi -y %out

    I couldn’t get it to work with piping from parec or using the MKV format. This seems to work for me, though I’m still tweaking the settings. MediaTomb will replace %out with its own output file, which will actually be a pipe. There is also %in, but I’m ignoring it for this purpose.

    I think this works better for my particular player, because Mediatomb uses different settings to stream transcoded jobs rather than files (chunk encoding etc), which seems to stop my player waiting for the entire file or trying to seek. I don’t want seeking with live streaming anyway. The only bad point seems to be that the “transcode” job never finishes, so even when I turn off my player, ffmpeg is still going.

  11. Hi Mike,

    I like your solution too, thanks for posting it.
    Did you resolve the issue of ffmpeg continuing?

  12. @Andy I didn’t bother to change anything yet. I usually just “umount” the fuse filesystem when I’m done watching, which stops the capture as well.

  13. That looks great. I got it working so far (it took some time to find out that I had to change the pactl line, as I am using a German Ubuntu (Natty) and the output is in German: Source = Quelle).
    The only problem I am facing: MediaTomb does not find the fuse_mnt folder anymore when it is mounted. Starting dlna_fuse changes the ownership to root. Do you have any suggestion for a solution?

  14. Weird. Are you running with sudo? You shouldn’t have to. I usually start it with “./ -s -f fuse_mnt”. The permissions as displayed by “ls -l” are “drwxr-xr-x myuser myuser” and they stay this way after mounting.

    I was wrong there. The mount point ownership is indeed changed to “root:root”, although the permissions allow everyone to read the file. The reason it works for me is that I start “mediatomb” manually with my user account. It doesn’t work with the “mediatomb” service that’s started on boot.

  15. FYI I’ve had better luck with minidlna streaming to my Samsung TV than with mediatomb. IE, FF and REW work. Thanks for figuring this out…

  16. Hey, I was also working on something like this for Android, and then I came up with the same idea as you: why not capture the video and stream it over DLNA? I wanted to first try it on Ubuntu and maybe port the required modules to Python for Android and test it. I successfully captured my laptop screen using ffmpeg. I used the file written by Nathan Vegdahl. I encountered the same problems as you: if you click on the recorded video while it is still being recorded, it plays only until the part where you clicked. You solved many of the problems. If you need any help, e-mail me; I am more than willing to join the project.

  17. I was planning to write an app which would take the Android frame buffer, convert it to a movie (AVI or MKV), and stream it over the DLNA and UPnP protocols, similar to AirPlay for the iPad and Mac (here is the link to the video; by the way, I hate Apple): if Apple can do it, hell yeah, Linux can do it. First I want to implement it on Linux before porting it to Android using the Scripting Layer for Android (SL4A, which has a Python build for Android known as Py4A; with it we can port any pure-Python modules to Android). But before that, let’s start an open-source project for mirroring the laptop screen. I am willing to join the project you worked on to extend it to the next stage. What is your opinion?

  18. What a great and thoroughly interesting article, thank you very much for that!

    I’d been thinking of doing something like this for some time. What I’d really like, though, is closer integration between the software doing the screen output, and the software generating and streaming the H.264, for greater efficiency.

    For example, say I have a program which draws to a window (text, vector graphics, bitmaps, etc), and I wish to see its output on a remote DLNA client.

    You could, of course, use your approach, and have something continually scanning the relevant part of the frame buffer, looking for differences over time, encoding them into lossless H.264 and streaming them to some client.

    But, instead, what if there was something intercepting the drawing commands, and generating H.264 frames directly, at the time that the changes occurred, knowing what they actually were (instead of having to scan for them).

    I’m a Linux newb, so I might be talking absolute rubbish, but I imagine I’m talking about some sort of X driver that “renders” X calls directly to an H.264 output stream rather than a physical screen/frame buffer. That way, any app could be streamed.

    Does anyone know if there is such a project underway anywhere?

  19. @amit raj The AirPlay video looks really cool. I wonder whether it’s DLNA-based, though. It requires an Apple TV box to stream to, right?
    I do not currently have any plans to work on the scripts, as they work more or less fine for me. However, I do agree that there is a lot to do to make them more useful. What exactly would be the things that you’d like to work on? I will gladly help you get started, if I can.

  20. Hi

    I really need some help. I have recently purchased a Lg smart tv upgrader for my HD lcd Tv. I can now share media between my laptop, tv and samsung galaxy tablet using dlna. Now I want to be able to view my laptop desktop on my TV wirelessly, Is that possible? If so please give me some simple instructions. Thanks in advance

  21. Thanks, nice article. Successfully replicated on Ubuntu with ps3mediaserver/tsMuxeR using the following ffmpeg parameters for video only. Very close to a DLNA remote desktop here.

    -f x11grab -r 24 -s 1600x1080 -i :0.0
    -vcodec libx264 -level 51 -vpre normal -crf 24
    -threads 0
    -f matroska pipe:1
    | dlna_live_streaming/ -
    > capture.mkv

  22. Been trying to get this to work with Android devices (Honeycomb tablet and Google TV). While I can play the fuse mounted file directly with mplayer, the android devices both immediately just fail to play the mediatomb shared video. Whether or not it’s an android issue or a mediatomb issue, dunno.

  23. Yeah, this is just what I’ve been looking for, except for the OS; I would like to do this on Windows.
    I’ve been doing my research, and it turns out that not many people are interested in streaming their computer screen to a TV via DLNA; they are more interested in using the computer as storage.
    But it seems that phones can already do this. People want to see their phones on big screens.

    I just think that media servers should have this function by default. It would be good leverage.

    Philips is smart.

    Does anybody know any kind of software that can do that job for me?

    I have a Panasonic VIERA TC-L42E30B


  24. Hello!

    Has anybody managed to configure MediaTomb and/or a Philips TV to get fast-forwarding/rewinding/absolute positioning working (on MKV files)? It seems it’s the biggest problem with DLNA servers, at least on Linux.

    Guilherme: you may try Coherence; it’s Python-based and thus [hopefully] cross-platform.

    By the way, why doesn’t one use an HDMI cable to display a live picture generated on a computer? The main con of such a solution is that it makes the TV remote control almost useless (only the volume can be adjusted). But this is the main pro too, because you can avoid all TV firmware-specific [mis]features, and VLC (or MPlayer) rules them all.

  25. @Alexey At the time I did this, I didn’t have a computer anywhere near the TV and I didn’t want to pull any cables. In the meantime, the old laptop that I used for DLNA streaming sits right next to the TV; it has become my media PC. So I’m not really using DLNA streaming any longer.

  26. To get this to work with a Samsung TV, I had to manually set the MIME type of the file to video/mpeg (I did this both in MediaTomb’s config and in the ffmpeg command).
    I also had to remove the mono audio re-encoding.
    I also made sure I didn’t have MediaTomb mark files I viewed with an asterisk.

    Right now it lags about 1 minute behind what I’m doing onscreen. Is there a way to shorten this time?

  27. If you are having problems with getting MediaTomb to see the FUSE mountpoint, it may be because it hasn’t been given the allow_other option.

    I was getting “ERROR: Failed to stat /path/to/mountpoint, Permission denied” in /var/log/mediatomb.log, even when everyone had read permissions.

    This happens because FUSE has a builtin security feature preventing anyone but the user which created the mountpoint from accessing it. To disable this feature, you need to edit /etc/fuse.conf as root and uncomment the user_allow_other line.

    Then add the allow_other option when starting, like -s -f -o allow_other /path/to/mountpoint.

    For a full list of accepted options, do -h (all of them are passed directly to FUSE).
    You may also have to start as root for it to be able to read /etc/fuse.conf.

    If you are using a locale different than English, like espe mentioned earlier, you can force pactl to produce English output by prepending LANG=en_EN.UTF-8, so the line begins with pulseaudio_monitor = os.popen("LANG=en_EN.UTF-8 pactl list......... That way you don’t need to change the grep commands no matter which language you normally use.

    I’m still not able to actually play any of the files yet. Clients on my phone crash, my Samsung TV gives up, and VLC complains about not being able to prefill the buffers.

  28. Hi Michael,
    Can you license your code under something like the GPLv3?
    The copyright text you added doesn’t allow mixing this code with anything GPL.
    I would like to add a frontend to this code, so anyone could use this functionality.


  29. You guys might be interested to note that the UPnP protocol appears to support “live” files, i.e., files that are still growing and do not have a fixed end point.

    GUPNP_DLNA_FLAGS_SN_INCREASE Content does not have a fixed end


    If this is the case, it would be much easier to implement streaming of a live file in the UPnP server, rather than trying to fool the server into doing so.

  30. This works fine on an old quad-core processor…


    avconv -f x11grab -s "$RESOLUTION" -r "$FPS" -i :0.0+$OFFSET -ab 192k -f alsa -ac 2 -i pulse -vcodec libx264 -crf 30 -preset "$QUAL" -tune "$TUNE" -vol 11200 -acodec libmp3lame -ar 44100 -threads 0 "$OUTPUT"

  31. Hi Michael,
    I’m on Ubuntu 14.04; your script doesn’t work for me.
    I just changed ‘cmd’ to not capture sound.
    It terminates with code 127:

    $ ./ -f fuse_mnt/
    Using PulseAudio monitor alsa_output.1.analog-stereo.monitor
    Running capture command: avconv -f x11grab -s 1280×720 -r 30 -i :0.0+0,0 -vcodec libx264 -crf 30 -preset ultrafast -tune animation -threads 0 -f mastroska – | /home/rcspam/Bureau/Scripts_to_test/ –
    read 1048576 0
    need to wait for file size 1048576 have 0
    Process has exited with code 127Process has exited with code 127
    Capture thread has exited
    getattr /.Trash
    readdir / 0
    getattr /.xdg-volume-info
    getattr /autorun.inf
    getattr /.Trash-1000
    getattr /a
    getattr /b
    getattr /c

    I don’t see why it fails …

  32. Hi Michael,
    I’m on Ubuntu 15.10; your script doesn’t work for me.
    When I try to start script:
    $ ./ python -f fuse_mnt

    I have this message:
    Traceback (most recent call last):
    File “”, line 30, in
    import fuse
    ImportError: No module named fuse

    What should I do?

  33. @rcspam
    You can run the cmd in bash to see the real problem. It’s most likely because of missing libav-tools.
