> Would the Next Gen. computer come without a mouse?

No... mice work fine for many purposes. Tablets for some uses, touchscreens for others.

> Would the Next Gen. computer have EMP (Electro-magnetic-pulse) protection? What happens if there's an EMP wave that destroys all the computers?

If that happens, you won't have to worry about power anymore... and yes, it will still destroy your computer, unless you intend to build it all out of rad-hardened components. No one will buy a $500,000 consumer appliance. If you want your media to survive the coming zombie apocalypse, or anything similar, do what I do... put it on optical backup, and store it in a good place.

Keep in mind a central theme at Commodore, Amiga, Metabox, and most places I've worked: high-functionality products for a low price. This means building in only the set of features that will deliver the most benefit to the most people, and making it as easy as possible to add on extra stuff.

> Would the Next Gen. computer be kitted out with Virtual Reality? Are we to look forward to walking around with VR headsets and VR gloves on, and talking to our computer with voice commands?

Nope... no one wants a glove. Just in general, I hate 'em... even winter gloves. I don't want anything covering my hands unless absolutely necessary. And no audio interface will ever work well, apart from perhaps speech interfaces on small personal devices.

At Fortele, we had a speech interface on the remote control... you had a speaker and mic. The control could be used as a phone or intercom, but also for voice macros to the system. But it also had buttons. Saying "NBC" into my remote to get to NBC would be cool... having to say "up-up-left-select" to operate the UI would be an epic fail. The touch UI on Android is very nicely augmented with voice input, for search, etc. Works just dandy... I can make a phone call faster with voice than touch.

But this only works because it's a closed system -- in both cases, I'm pushing a button to capture the command, and I'm talking privately... so the system only has to hear my voice commands. An open mic on a desktop or monitor will have to sort between my commands, the sounds from the television, the guy in the office next to me, etc. This is precisely why every audio input idea designed for general purpose use has failed.
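
Just to illustrate that "closed system" point, here's a minimal sketch of a push-to-talk dispatcher: recognized text is only acted on while the talk button is held, and only a small fixed vocabulary matches. The command names and channel numbers are made up for illustration, not any real remote's command set.

# Hypothetical push-to-talk command dispatcher: the recognizer is only
# consulted while the talk button is held, and only a small, fixed
# vocabulary is accepted. Everything else (TV audio, office chatter) is ignored.

COMMANDS = {
    "nbc": ("tune", 4),        # illustrative channel mapping, not a real lineup
    "cbs": ("tune", 2),
    "guide": ("show_guide", None),
}

def handle_utterance(text, button_held):
    """Return an action only for a known phrase captured while the button is held."""
    if not button_held:
        return None                      # open-mic audio is never interpreted
    return COMMANDS.get(text.strip().lower())  # None for anything outside the vocabulary

# Example: only the second call produces an action.
print(handle_utterance("nbc", button_held=False))  # None
print(handle_utterance("NBC", button_held=True))   # ('tune', 4)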

Of course, if you want a voice interface and a glove, these are technically pretty easy to add to existing systems already. As mentioned, it's already built in to Android, and there's at least some audio UI built in on Windows. A glove would just be another USB controller. I've seen a few used for immersive VR applications -- the only practical way, so far, to link my real hand to my avatar hand in such a system. If we're operating our desktops in immersive stereoscopic video at some point, maybe I'll retreat on the glove rejection a little bit. Of course, we're already seeing, via "Minority Report" (the 2002 film that got pretty much everyone, Apple, Microsoft, Google, etc. working on touch screen interfaces) and via MS's latest gaming stuff (though this goes all the way back to "Mandella" on the Amiga in 1986 or so), that computers are going to get really good at reading hand gestures without the need for anything as annoying as a glove.

> Will Next Gen. video screens shrink to the size of I-glasses and other personal headsets, or will they grow and expand to the size of IMAX resolution screens? What about 3D?

[I-glasses and headsets] are very low resolution compared to screens. My 24" monitors are actually larger in my visual field than the typical IMAX screen... it's all based on seat position vs. screen size. The minimum screen size for an IMAX presentation is smaller than you might think. IMAX film has an incredible resolution... it's shot on 70mm stock, but the 60-something-mm edge is the short edge (i.e., the film runs horizontally through the projector).
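
To put rough numbers on that: the angle a screen fills in your view depends only on its width and how far you sit from it. A quick back-of-the-envelope sketch, where the desk distance, seat distance, and screen widths are assumptions rather than measurements:

# Back-of-the-envelope check of the "visual field" claim: the angle a screen
# subtends depends only on its width and your distance from it. The monitor
# and seating distances below are assumptions, not measurements.
import math

def horizontal_angle_deg(width, distance):
    """Angle (degrees) subtended horizontally by a screen of given width at a distance."""
    return math.degrees(2 * math.atan((width / 2) / distance))

monitor_width_m = 0.53   # approx. width of a 24" 16:9 monitor
desk_distance_m = 0.60   # typical desk viewing distance (assumed)
imax_width_m    = 22.0   # a common IMAX screen width (varies by theater)
imax_seat_m     = 25.0   # a mid-auditorium seat (assumed)

print(f"24\" monitor: {horizontal_angle_deg(monitor_width_m, desk_distance_m):.0f} degrees")
print(f"IMAX screen: {horizontal_angle_deg(imax_width_m, imax_seat_m):.0f} degrees")

With those assumed numbers, both come out to roughly 48 degrees of horizontal field, which is the point: seat position matters as much as screen size.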

But for digital, the basic IMAX 3D presentation uses two 2K projectors, one for each eye. A 2K projector is nominally 2000x1000 pixels... roughly the same as HDTV. Some theaters (IMAX or not) are using 4K projectors... that's nominally 4000x2000 pixels. That's the absolute state of the art in theaters these days... though NHK in Japan is actually working on an 8K system.
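
For comparison, the raw pixel counts behind those nominal figures (these are the approximations used above, not exact DCI or broadcast specs):

# Rough pixel counts behind the "roughly the same as HDTV" comparison,
# using the nominal figures from the text (actual DCI formats differ slightly).
formats = {
    "HDTV (1080p)":          (1920, 1080),
    "2K projector (approx)": (2000, 1000),
    "4K projector (approx)": (4000, 2000),
    "8K (in development)":   (8000, 4000),
}
for name, (w, h) in formats.items():
    print(f"{name:24s} ~{w * h / 1e6:5.1f} megapixels")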

Again, the head-mount displays are useful in the fully immersive 3D context. You may not have much resolution per eye, but of course, with good motion tracking, you can have a very large effective screen. And that's what it would take to get me in one of those rigs. I think right now, we're not quite there computationally, and we're definitely not there OS-wise. But that could be a cool way forward. Particularly if some of the better micro-display technology is pushed, to deliver better resolution in such headgear.

> Will the Next Gen. computer allow me to interface my exercise bike and rowing machine so that I can exercise in full VR while competing online?

Of course, once you're doing all that, you're getting to some very hard core gaming add-ons, not something the average computer user is going to care about.

> They say todays LCD TVs and monitors aren’t built to last into the future – what’s your take on this?

No... modern LCD screens are exceptionally stable, and will last a very, very long time. LCDs really don't age in any significant way, at least on a human scale... other than the fact that the electronics themselves may eventually corrode and fail due to the environment (but being fairly cool, an LCD will easily outlast your computer). The CCFL backlights have an MTBF of 50,000-100,000 hours... that's awfully close to "forever", given that you're more likely to replace the screen for other reasons (I can get a better screen for $150) than due to failure. And LED backlights might well last past 1,000,000 hours.
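
To put those MTBF numbers in perspective, a quick calculation (the 8 hours of screen time per day is an assumption, not a spec):

# What a 50,000-100,000 hour backlight MTBF means in practice, assuming a
# fairly heavy 8 hours of screen time per day (an assumption, not a spec).
hours_per_day = 8
for mtbf_hours in (50_000, 100_000, 1_000_000):
    years = mtbf_hours / (hours_per_day * 365)
    print(f"{mtbf_hours:>9,} h MTBF  ->  about {years:,.0f} years of use")

Even the low end of the CCFL range works out to around 17 years of heavy daily use.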

And they're cheap... a 20-22" 1080p monitor can be had these days for under $150.

> The concept of OLEDs (Organic Light Emitting Diodes) is said to offer a broader spectrum of light over LEDs. So which is the more ‘future-proof’, LED or OLED?

I love the idea of OLED. There are still some technical issues, relative to screen life and larger displays, but I think it's one of the best possible large-scale replacements for LCD.

But why replace LCD? The native contrast of an LCD panel is usually 500:1 to 3000:1... some OLED displays do 1,000,000:1 or better. So the LCD panels today are using an array of modulated LED backlights to extend the contrast. As the LEDs get smaller... well, you're kind of moving to an OLED display anyway.
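
The arithmetic behind that: with a zone-dimmed backlight, the displayable dynamic range is roughly the panel's native contrast multiplied by how far the backlight can be dimmed. A tiny sketch, with the 100:1 dimming ratio assumed purely for illustration:

# Sketch of why zone-dimmed LED backlights stretch LCD contrast: the
# effective dynamic contrast is roughly native contrast x dimming ratio.
# The 100:1 dimming figure is an assumption for illustration.
native_contrast = 1000        # typical LCD panel, per the text (500:1 - 3000:1)
backlight_dimming = 100       # assumed ratio between full and dimmed zones
effective_contrast = native_contrast * backlight_dimming
print(f"Effective dynamic contrast: about {effective_contrast:,}:1")  # 100,000:1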

Going further ahead, there are a bunch of companies working on printable OLEDs. This would start to deliver things like wrap-around displays, disposable displays, super high quality, low cost large screen TVs, etc.

> So are we going to be able to bend, fold, and shape our mobile devices to whatever design we prefer? - transforming our phone from a wristwatch/bracelet style to a touch-screen, and then folding it up again to slip it into our pocket – like the Nokia Morph concept?

Of course, the Nokia Morph is just a concept -- they can't build it yet. I think there's some room for this, but so far there's no evidence that it's anything more than a gimmick, at least in this form.

On the other hand, there's going to be an increasing effort to make your digital devices always at hand. It's taken a good 30+ years of trying to deliver a good pocket computer, and we still don't quite have a viable Dick Tracy "2-Way Wrist TV"... but eventually, someone will have the iPhone or Droid of wrist-mounted smartphones... there are lots of problems to solve to make one of these useful: power, screen size, etc.

I've commented before on my take on the leading trends in the personal computer industry. The 1970s created the basic form of the personal computer, but it was still largely a hobbyist thing. The 1980s gave us the modern personal computer: multitasking, a GUI, something a non-expert could use. The 1990s saw the personal computer largely catch up with human needs -- gamers, multimedia authors, and scientists would still push the limits of computation (in fact, as of the mid-1990s, PC gaming was the primary motivating factor behind nearly every PC innovation).

So by the 2000s... it was basically all about case design. The PC was done. Sure, new OSs, new chips, etc. But the same kinds of things, just faster. No major breakthroughs in computing.

That leaves us with the 2010s... the post-PC decade. Those who weren't paying close attention didn't realize that a circa-2000 PC was fast enough for nearly every user. So they got surprised when much slower devices, like netbooks, ARM tablets, and smartphones, magically started being fast enough for the average user too.

This has the uncomfortable consequence that case design is now a primary motivating factor in unit sales. Apple understood this earlier than most PC companies... in the latter days of the PowerPC, they were still getting 2x-3x the price for their machines, versus much faster bog-standard PCs. But the casework -- superb.

> Would the Next Gen. computer offer further Cloud Computing functionality? Is Cloud Computing something you would advocate?

It has its place.

I think the basic Internet is working out pretty well, and the fact that Internet applications are doing more server-end computing is fine. There are some things, like web search, that only work as "cloud" applications.

It's also the ideal back-end for mobile services and sync. A mobile device, like a smart-phone or a tablet, could very well become the only computing device some people need. But historically, companies like Apple, Microsoft, and Palm treated these essentially as PC peripherals. There was a time when you could do virtually nothing stand-alone: no app or music purchases, etc. If you wanted a backup, even on a smart-phone, you'd have to sync to your PC.

Android was instrumental in showing how this could be different. Google, of course, had no attachment to the PC world. It's not universal, but to a large extent, things just work right on Android. If I buy an application via my PC or other web device, it'll just show up on my tablet or phone... but I can buy on the tablet or phone just as easily. If my smartphone is lost or stolen, I get much of the old phone's environment back just as soon as I authorize a new one. Notes I take on these devices automatically and instantly sync to my other devices.

On the other hand, I don't believe cloud computing will replace local computing for most applications. Some corporations are drooling at the prospect, but they never really dealt with the move from mainframe to PC, or the power that gives the PC user. Cloud-based computing will give these people the central authority they like... the only problem is how much of that control they have to concede to a cloud provider, rather than running it themselves. I see some advantage for businesses doing number crunching, to be able to buy computing power as needed, rather than buying for the peak and letting all that hardware sit around the rest of the time. Curiously, that's been part of the reason for cloud computing in the first place: Amazon, for example, has to have the servers to meet their Christmas-season order demands, or they're doomed. So at other times of the year, they commoditize that extra capacity.

And of course, you're not always online. Or the data demands are too high... I don't see video editing as a useful cloud computing function, given the hours it would take me just to upload a video. But email -- never a big computational deal anyway, unless you were Microsoft (Outlook is more bloated and less functional than most emailers I've used). And there are some advantages to the web orientation. I didn't really believe this until I started using GMail quite a bit... it's really a good system.

> How about HTML 5? It’s been a long time coming. Is it any good?

HTML5 is certainly a good thing. The Internet, like any other piece of technology, needs to evolve. HTML5 solves some (not all) of the issues for which we've previously turned to proprietary solutions, like Adobe Flash. And it's pretty clear this is well within the capability of most PCs, or for that matter, most smart-phones today. So why not?

This is sort of the idea of evolution that we've had all along in the PC industry. PCs have their slots and ports, web browsers have their add-ons and plug-ins. Once a function is useful enough that pretty much everyone needs it, it ought to be in the web browser, and, if it makes sense, in HTML itself.

When I was at Metabox building my set-top box, we had a version of the Voyager web browser with some extra HTML tags. With a couple of lines of HTML, I could create a video window of any size, and launch a video from any hardware player in the system (DVD, DVB, internet MPEG/MPEG-2). Little things like this allowed the STB to basically live "in" the web browser all the time... the main UI and all that... just web pages. Very easy to build, seamless transition from what's on the device to what's on the web, etc.
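
Those Voyager tags were proprietary, so here's a hedged sketch of the same idea with today's standard HTML5 video element, served by a few lines of Python: the whole "UI" is just a web page, and a couple of lines of markup put a sized video window on it. The port number and the "demo.mp4" file name are placeholders, not anything from the Metabox system.

# Minimal sketch: a set-top-style UI living entirely in the browser, with a
# video window created by a couple of lines of markup. The <video> element is
# standard HTML5; "demo.mp4" and the port are placeholders for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!DOCTYPE html>
<html><body>
  <h1>Set-top UI, living in the browser</h1>
  <video src="demo.mp4" width="640" height="360" controls></video>
</body></html>"""

class UIPage(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UIPage).serve_forever()  # browse to http://localhost:8000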

> Looking at the very distant future now… How do you see Nano technology evolving? Are we ultimately heading towards computers interfacing with the Human brain?

These are two separate issues. I'll take the latter one first. If I set you up in a room, alone, and give you a general knowledge quiz... some questions from "Jeopardy" or something like that, you might do ok. If you're in there with a computer and access to any reasonable modern search engine, you'll do substantially better. In this way, the computer, just in this one context, functions as a very crude brain amplifier... it's not thinking for you, but it's allowing you to access information far more rapidly and accurately.

I think there is absolutely no doubt that, eventually, we'll have computers integrated into the brain in such a way that they function directly as brain amplifiers... I think of a question, I get answers, pretty much as if I had thought of them. I work on a problem, and my ability to visualize, in my mind, will feel much like it does now, but things will be persistent, like on a computer... that image I picture, that schematic, that short film storyboard, etc., will still be there, in my mind, days, weeks, or months later. But the function will be far more organic -- a more sophisticated kind of brain amplifier.

This is normal evolution... every creature that makes artifacts evolves with those artifacts. And we make the best ones. It's affected us profoundly already... humans used to live to an average age of 35, then 45, etc., and now it's up there in the 70s or 80s, depending on where you live. And that, too, is just a primitive thing -- better knowledge of staying healthy, better treatments for fatal diseases, better surgery, etc.

One use for a nano-machine might be pushing this limit to whole new levels. The classic idea of nanotechnology is a machine of some kind, possibly a computer of some sort, that's built on a nanometer scale. Such devices are so small, they might last indefinitely. Some models have them self-replicating, others not so much. But either way... imagine a sea of nanobots in your bloodstream, all networked, working to keep you healthy. If they see high sugar or cholesterol levels in the blood, they physically remove those compounds. If they find fat deposits in the arteries and veins, they destroy them. Same with rogue cells (e.g., cancers). Just doing this, in fact, you'd wipe out several of the leading causes of death in western societies: no more heart disease, no more cancer, no more diabetes, etc. And they'd probably fight off infections, so most standard disease is gone too. And that's before we learn to make them rebuild shortened telomeres, or anything really clever like that.

> Finally, when will I get my Holodeck?

I'm still waiting for my flying car!


-- Many thanks to the wonderful Dave Haynie for taking the considerable time and effort to reply to all those questions! Thank you for a terrific interview!! --

Also mirrored on Lemon, see this thread.
