During the lecture, while proudly wearing a Creative Commons T-shirt, Mohamed showed himself to be an incredible multitasker: he was able to directly address questions asked in the chat window. One of those questions came from Julien:
Julien Dorra – 11:31: Q: about the video content, what tools would help you to better find it, analyse it and use it
Mohamed answered the question by pointing out the shortcomings of tools: they cannot cope with change. For example, when Facebook changes (its API, new features) or when a new social website appears (think of Google+), a tool stops working or becomes less effective. On top of that, there is the problem of there simply being too much information out there.
One of the lecture's slides
So how do we prevent journalists from drowning? How can we create tools that help them navigate, search and browse a huge collection of documents? Fellow lab participant Juan Gonzalez is working on a dashboard that shows video summaries, allowing people to browse effortlessly through a vast library of videos. I'm not sure how Juan plans to generate those video summaries, but we could generate them using metadata coming from LikeLines, the technology that, as you might know, I'm working on during this learning lab.
Speaking of LikeLines, I've been working on a prototype and a storyboard for a video this week. I wish I could already show you the real prototype, but for now you'll have to make do with a mockup I made:
Mockup that will guide me during the development of the first prototype
I'll focus on getting the UI front end done first. The back end will be a simple server-side Python script serving more-or-less static data. I'm taking this approach so that there will at least be something tangible, and it will also make creating the video easier, since I can just use screen-capture software to show LikeLines in action.
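To give an idea of what I mean by "more-or-less static data", a stub back end could be as small as the sketch below. To be clear, this is just an illustration: the /likelines path and the JSON fields are placeholders I made up for this post, not a final LikeLines API.

```python
# A stub back end in this spirit: a tiny server-side Python script
# that serves hard-coded "like" data as JSON. The /likelines path and
# the JSON fields below are placeholders, not the real LikeLines API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hard-coded playback positions (in seconds) where viewers "liked"
# a single demo video.
STATIC_DATA = {
    "video_id": "demo",
    "liked_points": [12.5, 48.0, 51.3, 120.7],
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/likelines":
            body = json.dumps(STATIC_DATA).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The UI front end can then fetch http://localhost:8000/likelines
    HTTPServer(("localhost", 8000), StubHandler).serve_forever()
```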
But there's still one thing I'm a bit worried about, and that's whether I'll succeed in selling LikeLines to news organizations. Looking at this week's additional assignment:
Keeping in mind the objectives and challenges identified in this week's presentations by Shazna Nessa and Mohamed Nanabhay, how does your project take into account the need to facilitate collaboration in the newsroom (whether real or virtual), while acknowledging that team members will have varying technological skill sets?
I cannot answer this question, as LikeLines in itself will not affect the newsroom directly. Instead, it will be tools built upon LikeLines that journalists will be using.
Anyway, I'll end this blog post with a logo I've been designing for the LikeLines project. It took a lot of iterations (because designing logos is hard). Feedback is appreciated. :)
Sketch of LikeLines logo
Note: The following comment was copied from the P2PU website since the original Blogger comment was lost.
Thanks for the mention! And yes... it seems we ought to figure out how our two projects work together. So here is my attempt: originally, I did a lot of research on image manipulation and came across a number of algorithms capable of discerning sudden scene changes, but that implied I had to break the video into its frames, or at least do a good sampling. Once the scenes are identified, each one of them would be time-lapsed to a 5-second snippet (remember the "twitter for video" concept?). These 5 seconds would allow us to present scenes of up to 2 minutes, as long as the "coherence" among the frames of a given scene is high (just theoretical, because this would be very intensive to calculate, probably an O(n^2) problem). Finally, EVERY scene of the video would be showcased in the dashboard as a separate snippet. This means that a video may qualify as relevant only because ONE of its scenes was very popular.
So.... instead of all this really hardcore computation and image manipulation, it seems you have a crowd-sourced approach which would allow some sort of "pre-processing", where certain power users would use LikeLines to explore and define what are the segments of the video that deserve to be promoted to the dashboard, right?
I like the idea of saving myself a lot of math work, but I have to figure out what would make that part of the process scalable. Imagine getting hundreds of video submissions in a short timeframe... how do you get people to LikeLine them all? Am I missing something?
Let's keep this discussion going... please.
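As an aside, here's roughly what the frame-difference approach Juan describes above could look like in code. This is only a minimal sketch assuming OpenCV's Python bindings (cv2); the histogram metric, sampling rate and threshold are placeholders of mine, not Juan's actual algorithms.

```python
# Minimal sketch of frame-sampled scene-change detection: sample
# frames, compare colour histograms of consecutive samples, and mark
# a boundary when similarity drops sharply. The sampling rate and
# threshold are arbitrary placeholders.
import cv2

def detect_scene_changes(video_path, sample_every=10, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    boundaries = []
    prev_hist = None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Correlation near 1.0 means "probably the same scene";
                # a sharp drop suggests a cut.
                sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if sim < threshold:
                    boundaries.append(frame_idx)
            prev_hist = hist
        frame_idx += 1
    cap.release()
    return boundaries
```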
Note: The following comment was originally posted on the P2PU website.
Hi Juan, could you try and repost your comment to my blog? I think Blogger ate your comment :X
Anyway, I'll try to reply here instead (and copy my reply to my blog when we somehow manage to get your comment posted there).
> where certain power users would use LikeLines to explore and define what are the segments of the video that deserve to be promoted to the dashboard, right?
It doesn't have to be power users. Everyone could use the LikeLines player, and there are two different ways of "liking" a segment in a video. The first is explicit: clicking the like-button records the current playback position as an indication of an interesting point in the video. The second is implicit and is achieved by recording the viewer's playback behaviour (skipping, seeking, etc.).
I still have to figure out how to interpret these explicit and implicit likes, though. For example, if someone clicks the like-button, it's quite likely that the user isn't referring to the current playback position t, but rather to something like t-10s, the part they just watched.
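To make that concrete, here's one possible way to fold both signals into a single per-second "interest" score. Consider it purely illustrative: the 10-second back-shift and the weights are guesses on my part, not a settled design.

```python
# Purely illustrative: fold explicit like-clicks and implicit playback
# behaviour into one per-second "interest" score for a video. The
# 10-second back-shift and the weights are placeholder heuristics,
# not a settled LikeLines design.
def interest_curve(duration, like_clicks, watched_spans,
                   like_shift=10.0, like_weight=5.0):
    seconds = int(duration) + 1
    scores = [0.0] * seconds

    # Implicit signal: every watched (start, end) span adds a little
    # interest to each second it covers, so re-watched segments
    # accumulate more.
    for start, end in watched_spans:
        for s in range(int(start), min(int(end) + 1, seconds)):
            scores[s] += 1.0

    # Explicit signal: a like-click at time t probably refers to what
    # the viewer just saw, so credit t - like_shift instead of t.
    for t in like_clicks:
        scores[int(max(t - like_shift, 0.0))] += like_weight

    return scores

# Example: 60-second video; one viewer watched 0-30s twice and clicked
# "like" at t=25s, so the peak lands around the 15-second mark.
curve = interest_curve(60, like_clicks=[25.0],
                       watched_spans=[(0, 30), (0, 30)])
print(max(range(len(curve)), key=curve.__getitem__))
```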
> Imagine getting hundreds of video submissions in a short timeframe... how do you get people to LikeLine them all?
If you get hundreds of video submissions in a short timeframe, then LikeLines will only be helpful if enough people also watch those videos. There is also the problem that these people would have to watch the videos using a LikeLines player instead of a regular video player (like YouTube's, for example).
(This is why I really hope the LikeLines idea catches on and all major video websites will start employing more intelligent video players, so that viewer communities will generate interesting metadata by just consuming video)