
View Full Version : amazing originality from microsoft ?



Dreadful Scathe
12th-June-2007, 11:04 AM
Wow. Just watch this short video (http://www.ted.com/index.php/talks/view/id/129) - there are 2 things in it: an amazing wall of images of any resolution and infinite detail, really cool and very Blade Runner. The 2nd thing - spatial hyperlinks - is beyond cool and into "frikkin sharks with laser beams on their heads" territory. :)

You really need sound, but you may get the gist without it.

straycat
12th-June-2007, 11:15 AM
Wow. Just watch this short video (http://www.ted.com/index.php/talks/view/id/129) - there are 2 things in it: an amazing wall of images of any resolution and infinite detail, really cool and very Blade Runner. The 2nd thing - spatial hyperlinks - is beyond cool and into "frikkin sharks with laser beams on their heads" territory. :)

Wow. As you say - beyond cool (and this comes from an M$ hater)

Wonder what that would look like if they combined it with Jeff Han's work (http://www.ted.com/index.php/talks/view/id/65)...

Dreadful Scathe
12th-June-2007, 11:29 AM
Wow. As you say - beyond cool (and this comes from an M$ hater)

Wonder what that would look like if they combined it with Jeff Han's work (http://www.ted.com/index.php/talks/view/id/65)...
Indeed - even ST:TNG touch control panels didn't demonstrate multi-touch in that innovative way... cool.

The link to the demo of the spatial thing is here (http://labs.live.com/photosynth/view.html?collection=sanmarco/index1.sxs).

ducasi
12th-June-2007, 12:09 PM
Haven't seen the video yet, but re: Jeff Han, multi-touch, et al, it's worth reading these two articles from The Register...

Why Microsoft's innovation is only Surface deep | The Register (http://www.theregister.co.uk/2007/06/01/fentem_microsoft_surface/)

Surface computers: debunking Microsoft and Han | The Register (http://www.theregister.co.uk/2007/06/08/surface_computing_mailbag/)

ducasi
12th-June-2007, 12:31 PM
OK, seen the video now... The zooming stuff looks neat – I wonder how much disk and memory you need to do that, never mind the spec of the graphics card...

The second part with Notre Dame... Cool, but besides touristy places like that, where are you going to get enough photos of anywhere to make that actually useful?

Lee Bartholomew
12th-June-2007, 12:35 PM
OK, seen the video now... The zooming stuff looks neat – I wonder how much disk and memory you need to do that, never mind the spec of the graphics card...

The second part with Notre Dame... Cool, but besides touristy places like that, where are you going to get enough photos of anywhere to make that actually useful?

Jenna Jameson springs to mind. :whistle:

Dreadful Scathe
12th-June-2007, 12:42 PM
OK, seen the video now... The zooming stuff looks neat – I wonder how much disk and memory you need to do that, never mind the spec of the graphics card...

The second part with Notre Dame... Cool, but besides touristy places like that, where are you going to get enough photos of anywhere to make that actually useful?
You didn't pay attention ;) - the whole point of this, and what makes it cool, is that it uses pictures already freely available - he quoted Flickr as an example. So you use a picture as a starting point and it uses as many as it can find from everyone else who has been there to build the spatial representation.

Go and see the actual demo at the link I posted.

straycat
12th-June-2007, 01:51 PM
Haven't seen the video yet, but re: Jeff Han, multi-touch, et al, it's worth reading these two articles from The Register...


If only to realise that the author is an idiot? :whistle:

So the 'fatal flaw' is that the demo systems are enormous??? The iPhone uses similar ideas in its interface, and 'enormous' isn't really a word that comes to mind when describing it (unless you're talking about potential).

All the examples are shown in the dark: if you're demoing something where the whole emphasis is on the projector screen, and you want it to look really good, that sounds like a good move. There's nothing to say it won't work in lit rooms... Most of the main arguments against it here are... silly, imho.

bigdjiver
12th-June-2007, 01:54 PM
Wow. Just watch this short video (http://www.ted.com/index.php/talks/view/id/129) - there are 2 things in it: an amazing wall of images of any resolution and infinite detail, really cool and very Blade Runner. The 2nd thing - spatial hyperlinks - is beyond cool and into "frikkin sharks with laser beams on their heads" territory. :)

You really need sound, but you may get the gist without it.

Thanks for the link to TED: Ideas worth spreading (http://www.ted.com) - rep deserved, but, allegedly, has to be spread first.

ducasi
12th-June-2007, 01:59 PM
You didn't pay attention ;) - the whole point of this, and what makes it cool, is that it uses pictures already freely available - he quoted Flickr as an example. So you use a picture as a starting point and it uses as many as it can find from everyone else who has been there to build the spatial representation.

Go and see the actual demo at the link I posted.
"The Photosynth technology preview runs only on Windows XP SP2 and Windows Vista."

I agree it's cool – but is it useful?

Dreadful Scathe
12th-June-2007, 02:09 PM
It's easy to install Windows XP SP2 - even on a Mac or Linux - but yes, I see your point: if this technology isn't platform-independent on release, I'll be just as annoyed as you.

David Franklin
12th-June-2007, 06:27 PM
You didn't pay attention ;) - the whole point of this, and what makes it cool, is that it uses pictures already freely available - he quoted Flickr as an example. So you use a picture as a starting point and it uses as many as it can find from everyone else who has been there to build the spatial representation.

Couple of things here.

Firstly, this is almost certainly an "asymmetrical" algorithm. In other words, I would guess it takes a very long time to do the inter-picture correlation. Once you've done that, the viewing side (i.e. what you see in the demo) is much much simpler. But Microsoft probably used a ton of CPU resources creating the demos, and I wouldn't be surprised if there was a lot of hand tweaking (and careful selection of "good" pictures) involved as well. Which means you won't be able to upload your pictures anytime soon. (Actually, I just checked the Photosynth website, and it appears I'm bang on the money here).
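
To illustrate what I mean by "asymmetrical" - and this is just a toy sketch with made-up function names and thresholds, nothing to do with Microsoft's actual code - the expensive pairwise correlation runs once, offline, and the viewer only ever does cheap lookups into the precomputed result:

from itertools import combinations

def build_collection(photos, match_score):
    # Offline and expensive: correlate every pair of photos (O(n^2) feature matching).
    # photos: list of image ids/filenames; match_score: some slow image-matching function.
    links = {}
    for a, b in combinations(photos, 2):
        if match_score(a, b) > 0.5:          # arbitrary made-up threshold
            links.setdefault(a, []).append(b)
            links.setdefault(b, []).append(a)
    return links                             # precomputed "spatial" graph, shipped with the demo

def view_neighbours(links, photo):
    # Online and cheap: the viewer just looks up precomputed neighbours.
    return links.get(photo, [])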

Secondly, I would be very very surprised if everything in that demo wasn't cached on the local hard drive of the machine. Two golden rules:


1. Any sufficiently advanced technology is indistinguishable from a rigged demo.
2. Almost all programming can be viewed as an exercise in caching.

It still looks very very impressive, of course. But it's a long journey between a tech demo and a product.

bigdjiver
12th-June-2007, 07:20 PM
Couple of things here.

Firstly, this is almost certainly an "asymmetrical" algorithm. In other words, I would guess it takes a very long time to do the inter-picture correlation... Certainly used to be true, but I would not be surprised if things had not moved on. A search on "patent image recognition" brings up many novel technologies being rolled out. Google has bought into this market, and there are many others working on scanning the internet for copyright violations. If the copyright scanners coordinated their efforts with building a global image map the results could be amazing.

David Franklin
12th-June-2007, 07:37 PM
Certainly used to be true, but I would not be surprised if things have moved on. A search on "patent image recognition" brings up many novel technologies being rolled out. Google has bought into this market, and there are many others working on scanning the internet for copyright violations. If the copyright scanners coordinated their efforts with building a global image map, the results could be amazing.

From the Photosynth Website: (http://labs.live.com/photosynth/FAQ.htm#UploadMyPhotos)


When will I be able to upload my own photos?

We want to provide this capability as soon as we can, but there are some real technical hurdles to solve before we're ready for primetime. We're still learning what works and what doesn't with the recognition algorithms, improving them as we go. It's also very computationally intensive; the processing to build a collection can take hours or days at the moment.

Yes, things are moving quickly in these areas. But as computational power grows, the ambitions grow, and often new problems come up. For example, early algorithms might just use 4 "good" feature points and require manual intervention when something happened to one of them (e.g. it got obscured by another object). Now CPUs are fast enough to handle hundreds of feature points, which means losing a single point isn't such a problem. But it also means manual selection/intervention of those points isn't terribly feasible, so the machine now has to decide which points to "trust", which to discard, etc. So, you gain something, but you immediately get a whole new set of problems.
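
Very roughly, the "which points to trust" step is the sort of thing a RANSAC-style loop does - here's a toy sketch with a deliberately dumb translation-only model and made-up thresholds, not anything Photosynth actually uses:

import random

def ransac_trusted_points(matches, n_iter=500, tol=3.0):
    # matches: list of ((x1, y1), (x2, y2)) candidate correspondences between two frames.
    best_inliers = []
    for _ in range(n_iter):
        (px, py), (qx, qy) = random.choice(matches)   # fit a model to one random match;
        dx, dy = qx - px, qy - py                      # here the "model" is a pure translation
        inliers = [m for m in matches                  # keep matches that agree with the model
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers                                # the points the machine decides to "trust"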

I was responsible for the image tracking/stabilisation software on a film compositing product, once upon a time; we had just started looking at the "OK, now we can handle lots more than 4 points, what shall we do?" problem when the product was canned. [Trivia/Boast: In the film Contact, early on there's a long tracking shot where we first see the Arecibo telescope. The original footage had it looking rather grimy, and they wanted it to look a lot cleaner. We got them a beta version of the software, and they used it to produce a cleaned up version].

Dreadful Scathe
12th-June-2007, 10:24 PM
Couple of things here.

Firstly, this is almost certainly an "asymmetrical" algorithm. In other words, I would guess it takes a very long time to do the inter-picture correlation. Once you've done that, the viewing side (i.e. what you see in the demo) is much much simpler. But Microsoft probably used a ton of CPU resources creating the demos, and I wouldn't be surprised if there was a lot of hand tweaking (and careful selection of "good" pictures) involved as well. Which means you won't be able to upload your pictures anytime soon. (Actually, I just checked the Photosynth website, and it appears I'm bang on the money here).

I didn't expect otherwise - but the fact that the demo exists suggests that although it may not be an online resource (other than adding to the ones built in-house) in the foreseeable future, it is possible some client software could be released sooner than that. But anyway, it's still cool :)


Secondly, I would be very very surprised if everything in that demo wasn't cached on the local hard drive of the machine.

If you follow the link above you can try the demo yourself. It is extremely fast on my laptop using Firefox - for all 4 test demo locations.



It still looks very very impressive, of course. But it's a long journey between a tech demo and a product.

Did you try the online demo yourself? Seems promising to me. As you say, the actual building of the spatial model from random photographs may be a long way off, but if Microsoft were to add this to, for example, a map program for various locations round the world in the short term - I'd still find it cool. (And far more impressive than Google's low-res street-level addition on Google Maps - zoom into Manhattan if you've not seen that.)

The artist's studio is the coolest: as you look round you can see various paintings on the walls; most of them are high-res pictures that you can zoom in on far enough to see what type of canvas and paint it is. :)

Tell me that's not slightly cool ;)

David Franklin
12th-June-2007, 10:59 PM
If you follow the link above you can try the demo yourself. It is extremely fast on my laptop using Firefox - for all 4 test demo locations.

I couldn't get it to run on Firefox, but as you insisted I try it, I've run it on IE (spit!). So... it's about the speed I expected, and to me that is a fair bit slower than it looked in the demo. But on the other hand, when you're watching a demo clip that's been compressed to 320x240, you're not really going to be able to see the images loading in. So I guess I was just assuming the actual live demo was running at HD res in real time, which I would say is beyond what is achievable over any realistic web connection.

(In other words, I probably imagined the demo to be a lot better than it actually is).

bigdjiver
12th-June-2007, 10:59 PM
It's also very computationally intensive; the processing to build a collection can take hours or days at the moment.

OT: I guess I had run into one of the common traps of expecting new scientific miracles every day. Long ago I decided to try to solve some of the unsolved problems in math. I started with the travelling salesman problem. An algorithm from a computer magazine ran into days when the number of towns got into double figures. I found an algorithm which got an approximate answer quickly. Later I used my algorithm to make machines drill a printed circuit board 15% quicker than an expert human-designed path. It could process a 2,000-hole board in minutes.

I was expecting similar strides in graphics-processing algorithms. Humans can match pictures very quickly, and do "spot the difference" puzzles for fun. From my armchair it looks easy.
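
(For the curious: the flavour of "quick approximate answer" I mean is along the lines of a nearest-neighbour heuristic. This little sketch is not my original drill-path algorithm, just the simplest version of the idea.)

import math

def nearest_neighbour_path(holes):
    # Greedy approximation: always drill the closest remaining hole next.
    # Not optimal, but fast, and usually far better than an arbitrary order.
    remaining = list(holes)
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining, key=lambda h: math.dist(last, h))
        remaining.remove(nxt)
        path.append(nxt)
    return path

print(nearest_neighbour_path([(0, 0), (5, 1), (1, 4), (6, 6), (2, 2)]))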

Dreadful Scathe
19th-June-2007, 05:12 PM
An update on this story here (http://www.wired.com/software/coolapps/news/2007/06/vr_conference).