16 August 2011, 18:42
lifeschool
INTERVIEW WITH DAVE HAYNIE - Insights On Future Technology!!

Greetings everyone,

You are welcome here today to bear witness to a brand new and exclusive interview with a hero of the Amiga scene - the legendary Dave Haynie. For those who don't know, Dave started on the Commodore 128 and continued through to the much-heralded 'AAA' chipset - the successor to AGA. I took the opportunity to interview him.



> Hello Dave. I recently saw your 2005/2006 CommVEX/User Group videos on YouTube - in which you are sadly underused as an interviewee (guess you noticed). Any thoughts on that?

Yeah... that was kind of weird. They actually paid for my trip out there, and then it was basically up to me to just hang around. I even broke out a guitar and played for them while they were taking the show down.


> So, before we gaze longingly into the future, let's begin in the past. Whatever happened to the idea of the Amiga Set-Top Box?

It's kind of happened. This goes way, way back to Commodore, of course. We had brainstorming sessions back in the C128 days, and I had come up with the idea of a "living-room computer"... something that would integrate into your component stereo system. This was, of course, before video was a typical component of such systems, but at the time, I had one of the first televisions designed to do just that (from Panasonic's Technics brand).

So the idea for the STB I had in the late 1990s was effectively to put a personal computer in your A/V system, but just not call it a personal computer. Our early Metabox 500 was sort of a rough cut, but the never-released Metabox 1000 was the real thing. This ran our own Amiga-like OS, CAOS (Carsten and Andy's Operating System), complete with MUI, Voyager, etc. The graphics chip I used did hardware picture-in-picture, so you could watch a DVD or DVB broadcast (and eventually, internet video) in a window or two without the need for hardware scaling. It even played MP3s.


> So how do you decide which multimedia formats a system will use?

Back in the late 1990s, it wasn't entirely clear which media formats would dominate, and there was a break between consumer electronics and the internet. Today, you could cover nearly everything with the existing DVD/Blu-ray formats, which you have to have anyway. Going a little deeper, since any STB system is going to be largely software driven, it's likely that important new formats, like maybe the WebM or Ogg stuff, are just "a simple matter of software".

Hardware overkill is why products often fail.. you don't need every interface built-in on a device, or it'll be too expensive for anyone to ever buy. No one who would buy such a product cares about video over RF anymore, other than maybe television reception (ATSC/DVB). No one's using Firewire anymore. LAN and USB, those are must-haves. SATA maybe internally, but there's no need for that to be external... USB 2.0 is fast enough for video, and these days, I'd go with USB 3.0 and just be done with it.

Any lesser interface out there probably has a bridge device already. For example, there are video capture devices that go from analog to USB, plenty of them. For the rare user who needs this, find one or two and ensure your STB works with them.

NO ONE is going to use such a device for professional video work... not even people like me who use their PCs for professional video work. I absolutely want my STB to be able to preview completed videos in many ways (eg, from memory card, over the net, etc) but I have no use for editing on them. And if I were starting on this today, I'd be entirely happy with memory card (perhaps even just SD/SDHC/SDXC) and USB for video and photo capture, since that's the overwhelming standard today in consumer and even many professional products.


> What about video standards and other output 'compatibility' issues?

SVGA is already a dinosaur. New systems offer only DVI, HDMI, DisplayPort, maybe Thunderbolt if it actually catches on beyond Apple. A full DVI port includes analog outputs, but that's kind of going away. It's already the case that analog inputs on a computer monitor are starting to vanish (ok, my two 1200p monitors actually have CVBS, Y/C, YPrPb, VGA, and HDMI inputs... but it'll be a challenge to find a monitor with any analog input other than VGA, and many are leaving that off).

You have to be careful about getting too crazy. If I charge every user for the 0.001% of people who want to hook up a turntable, I have lost already. On the other hand, if I build in software that works with, say, all Numark USB turntables and the cheap Behringer turntable to audio USB interface (well, hey, I have both of these on my desk right now), then I've handled the problem well enough to offer a practical solution, for perhaps a day's work on software drivers (if that).


> Was it difficult for the Metabox to handle very good quality streaming video in those days using the existing hardware?

Not so much. Hardware has been growing faster than networking. Back when Metabox USA was in talks with Blockbuster and Enron (in 2000) to deliver the STB piece for their vision of "a Blockbuster store in every home", we were talking about 1Mb/s video, with lots of local buffering (eg, you needed a hard drive) to deliver more-or-less-DVD quality video in MPEG-4 ASP, not even AVC.

Today, of course, DVD quality isn't difficult, but no one's delivering streamed Blu-ray quality yet. Of course, with the Apple-fication of media, there are lots of people who'll take something of lesser quality, now, versus having to wait for a day or go to a store to see it better.


> With TV station viewing figures continually on the decline, by 2002 you had the original idea of bringing the power of network streaming technology to the home user. Tell us about Fortele…

Springsteen wrote a song about "57 Channels (and nothing on)".. and sure, you can extend that to 500 channels on some days today. After Metabox, Andy [Finkel] and I had a startup called Fortele. Our goal was whole home media networking, where the big thing was the network. You'd hook devices into the network and they'd work like PC peripherals -- entirely subsumed by the network interface. Every device would have the same UI via our interactive system. You could watch or record any source of video on any video player in any room in the house, even take it with you (with enough warning, anyway).

DRM was probably the largest issue to deal with there... the DVD and satellite/cable providers didn't love the idea of ripping their resources and spreading them around the house. This ironically made this idea easier to eventually build with free tools on your own (all those folks supporting some kind of actual PC for the living room) than to commercialize.

We never got that far... it was bad business people that ultimately killed that project (bad at getting money.. it sometimes seems like you either get honest money people who are no good at their job, or bad-asses who bring in the cash but are as likely as not to stab you in the back if they've got nothing better to do).



> That brings us up-to-today. In your view, which is better - Mac or PC? (or emulate both?)

Macs are PCs.. it's all just about the OS these days. And few people are worried about both -- why does some OS I'm not using have anything important to say on how I write a program for the OS I am using?


> Why, despite 64-bit speed-stepping six-core CPUs, are modern computers still painfully slow to boot and operate?

There are two things at work here. They really are very fast. But that speed is always being used. Look at the work an Amiga had to do to boot... most of the code was already resident, so it booted, not instantly, but in seconds, because it just didn't do that much before starting. Today's PCs have hard drives 5x faster than the fastest Amiga RAM, but they're doing thousands of extra things before they boot. This could be changed, but there's no great incentive to do so, because you just don't reboot that often.

If you look at either Windows or Linux, they're doing lots of things at boot time, loading all kinds of services... there's a reason why an average week's updates for either are many, many times larger than the whole AmigaOS. To speed things up here, they could do what we did in later versions of AmigaOS -- launch the critical components immediately, then spawn off a background process to complete the other stuff.

That would work, but it would also cause problems in current-gen OSs. For one, applications aren't written to launch with a system not fully formed. If you took some of the ideas of an OS like Android, in which every application has a resource manifest, and applied that to system resources as well, then this would all kind of "just work".... apps would launch, but know to sleep until the things they needed were fully available.
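
To put that staged-boot idea into code - this is purely an illustrative sketch in C with pthreads, not anything from AmigaOS, Android, or Windows - critical services come up synchronously, the rest are finished by a background thread, and an application declares a "manifest" of the resources it needs and simply sleeps until they are all available.

[code]
/* Hypothetical sketch (not AmigaOS or Android code): start critical
 * services synchronously, finish the rest in the background, and let
 * "applications" sleep until the resources they declared are ready. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define RES_DISK   (1u << 0)
#define RES_NET    (1u << 1)
#define RES_AUDIO  (1u << 2)

static unsigned ready_mask = 0;              /* which resources are up   */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void announce(unsigned res, const char *name)
{
    pthread_mutex_lock(&lock);
    ready_mask |= res;                       /* mark resource available  */
    printf("boot: %s ready\n", name);
    pthread_cond_broadcast(&cond);           /* wake sleeping apps       */
    pthread_mutex_unlock(&lock);
}

/* Non-critical services are brought up after the desktop is usable. */
static void *late_boot(void *arg)
{
    sleep(1); announce(RES_NET,   "network");
    sleep(1); announce(RES_AUDIO, "audio");
    return NULL;
}

/* An app declares (manifest-style) which resources it needs, then
 * sleeps until all of them exist instead of failing at launch. */
static void *app(void *arg)
{
    unsigned needs = *(unsigned *)arg;
    pthread_mutex_lock(&lock);
    while ((ready_mask & needs) != needs)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    printf("app: all declared resources ready, running\n");
    return NULL;
}

int main(void)
{
    pthread_t bg, t;
    unsigned manifest = RES_DISK | RES_NET;      /* this app needs disk+net */

    announce(RES_DISK, "disk");                  /* critical: done up front */
    pthread_create(&bg, NULL, late_boot, NULL);  /* rest in background      */
    pthread_create(&t, NULL, app, &manifest);    /* launch app immediately  */

    pthread_join(t, NULL);
    pthread_join(bg, NULL);
    return 0;
}
[/code]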

As for slowdowns in interactivity, that's largely due to poor system design. Keep in mind, for example, that Windows was designed for single-tasking systems. There are all kinds of things in Windows APIs that are designed to serialize things that should happen in parallel. In other OSs (used to be true of MacOS, not sure now), only one of N CPU cores can be the I/O processor -- only that one can handle hardware interrupts.

In short.. if you have slowdowns on the desktop, it's the software. And some of that's buried deep in the OS architecture... if we were all running AmigaOS or BeOS or something like it today, you wouldn't see much of it (though actually, there are some problems that have been exposed in the AmigaOS model that really start to drag a system down, too.. this is why software needs to evolve to match the hardware).

PCs evolve as fast as they can. And honestly, unless you're doing games or video or a few other very intensive tasks (I needed a memory upgrade to 16GB to really be happy with some photo editing I'm doing lately...but when you're merging 30-something 18Mpixel photos in 48-bit color, this happens), PCs have been plenty fast for the average user for quite some time.


> Would it not be faster just to combine the CPU and Ram (or HDD) into one unit?

That's not even remotely a near-term solution. But of course, systems have been evolving to manage delay, which is the real solution. As transistors get cheaper, you have more on-chip cache, more processors, etc. to ensure that, more of the time, you're not waiting on some hardware resource... or your wait is pushed aside to let other things do their work.


> So, perhaps it’s not the speed of the hardware but the speed of the OS which matters most?

You could come up with all kinds of sophisticated ways to speed an OS up, and they might do great -- if you already had an efficient OS. But the problems these days are more large scale. I mean, look at Windows design.

Like Intuition interfaces on the Amiga, every Windows application has a mandatory Windows message port (unless it's a shell program). Windows uses this for damn near everything.. even signals sent asynchronously from one thread to another can, if you're not careful, get turned into messages on the message port. The message queue, too, is a serial thing.. you have to clear one message to get to the next one.

Now, in Intuition, we have some pretty good power.. an application registered with Intuition, but only for the messages it needed to see. Others were handled entirely by Intuition, and the message queue you did get was just for your application. Intuition could do a bunch of things, like handling window resizing, gadgets, etc.

Windows isn't that sophisticated. For one, every window in Windows is what Amiga calls a SIMPLE_REFRESH window... when something changes in the window manager, you get a message from the OS that says "redraw your stuff". Nothing else... no OS management of the window contents. This is why, when a Windows system gets busy, windows don't erase, much less redraw, and even the gadgets don't get refreshed.
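
For readers who never wrote Intuition code, here's a rough from-memory sketch of the classic AmigaOS pattern being described - an illustration only, so the exact headers and flag names should be checked against the OS 2.x includes. The window is opened SIMPLE_REFRESH, the program asks Intuition only for the IDCMP classes it cares about, and damage is repainted between BeginRefresh() and EndRefresh().

[code]
/* Rough from-memory sketch of classic (OS 2.x era) Amiga code -- not
 * verified against the includes.  The window is SIMPLE_REFRESH and asks
 * Intuition only for the two IDCMP classes it cares about; everything
 * else never reaches the application at all. */
#include <proto/exec.h>
#include <proto/intuition.h>
#include <intuition/intuition.h>

struct IntuitionBase *IntuitionBase;

int main(void)
{
    struct Window *win;
    struct IntuiMessage *msg;
    ULONG class;
    int running = 1;

    IntuitionBase = (struct IntuitionBase *)OpenLibrary("intuition.library", 37);
    if (!IntuitionBase) return 20;

    win = OpenWindowTags(NULL,
        WA_Width,  320,
        WA_Height, 200,
        WA_Title,  (ULONG)"Simple refresh demo",
        WA_Flags,  WFLG_SIMPLE_REFRESH | WFLG_CLOSEGADGET | WFLG_DRAGBAR,
        WA_IDCMP,  IDCMP_REFRESHWINDOW | IDCMP_CLOSEWINDOW,  /* only these two */
        TAG_END);

    if (win) {
        while (running) {
            WaitPort(win->UserPort);
            while ((msg = (struct IntuiMessage *)GetMsg(win->UserPort))) {
                class = msg->Class;
                ReplyMsg((struct Message *)msg);   /* reply promptly */
                switch (class) {
                case IDCMP_REFRESHWINDOW:
                    BeginRefresh(win);             /* clips drawing to the damage */
                    /* ...redraw window contents here... */
                    EndRefresh(win, TRUE);
                    break;
                case IDCMP_CLOSEWINDOW:
                    running = 0;
                    break;
                }
            }
        }
        CloseWindow(win);
    }
    CloseLibrary((struct Library *)IntuitionBase);
    return 0;
}
[/code]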

Now consider that, until XP or Vista (don't recall offhand), there was one single global message queue -- another aspect of back when Windows was single-tasking. So application A gets a redraw message. A well-behaved application A would acknowledge that message, grab any others it might have, then signal a thread (without using the message queue again... but this isn't always an easy thing to do) to go and redraw the window. Of course, a poorly designed application will use just the one thread, and hold up the whole message queue -- every other application on-screen -- while it does its work.
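
The same point as schematic C - plain pthreads here, not the actual Win32 API - the handler invoked from the shared, serial message pump only records that a redraw is needed and signals a worker thread, so the pump can move straight on to the next message instead of stalling every application behind one slow repaint.

[code]
/* Schematic only (plain C + pthreads, not real Win32 calls): the handler
 * called from the shared message pump must return quickly, handing the
 * slow redraw to a worker instead of doing the work inline. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int redraw_pending = 0;

/* Worker thread: does the expensive repaint off the pump thread. */
static void *redraw_worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!redraw_pending)
            pthread_cond_wait(&cond, &lock);
        redraw_pending = 0;
        pthread_mutex_unlock(&lock);

        sleep(1);                       /* stand-in for the slow repaint */
        printf("worker: window repainted\n");
    }
    return NULL;
}

/* Called from the (shared, serial) message pump: acknowledge and return. */
static void on_redraw_message(void)
{
    pthread_mutex_lock(&lock);
    redraw_pending = 1;                 /* coalesces repeated requests   */
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    /* returning at once lets the pump dispatch the next message */
}

int main(void)
{
    pthread_t w;
    pthread_create(&w, NULL, redraw_worker, NULL);

    on_redraw_message();                /* pretend the pump delivered one */
    sleep(2);                           /* give the worker time to run    */
    return 0;
}
[/code]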

And Windows isn't the only culprit here... all current OSs have these kinds of issues. Even AmigaOS can get really slow over issues like layer locking (there's a video up on YouTube which shows MorphOS, essentially cloning the AmigaOS, failing against MacOS on opening multiple web browser instances, due to some of this stuff). There is a ton of room for optimization.

One problem is the very fact that OSs are not a big deal anymore. Most of the time, they do what they need to do, and otherwise get out of the way. And the actual value of a commercial OS as a product is close to zero... that's one big thing that did Be, Inc. in. So while there have been advances in OS technology, no one's really looking at efficiency issues, particularly on the desktop. After all, you know very well that next year's computer will be twice as fast (which actually doesn't help against these sorts of software state-machine inefficiencies and deadlocks one bit. I also expect that the few people on the earth who understand these issues well are not working for OS companies).


> Some people think the OS should be the emblem and selling point of the machine - others consider the OS as nothing more than a background systems launch-pad; to launch applications as quickly as possible. I prefer the second option…

That's pretty much the role of OSs today... no one's really selling a machine on the OS other than Apple these days. And there are good historical reasons why.

Back in the 8-bit days, when you bought a computer, the OS would be something you used directly and every day, as a programmer. In the early days, no one bought a computer who didn't write at least some code for it. So you had to be concerned about the machine, you wrote directly to the hardware, etc... and you could actually learn the whole thing. In short, your hobby (which was 90%+ the point of personal computers in those days) was the computer itself. This is why so many people still mess around with the C64.

As computers evolved, new people started using them... and they didn't care much about the OS, they just needed to know their applications would run. That's even more true today, with the additional fact that many of your applications can be web-based, so in fact the OS doesn't even matter sometimes for the old application-lock-in advantage (eg, the thing that's kept Windows alive all these years). I can surf the net, watch videos, etc. as well on my Android tablet as on my desktop PC.. but the tablet moves with me, and even works well outside.

All of these things are making the OS less important. It's still an issue for application compatibility -- I don't care so much about running Windows, but I want Sony Vegas and Cakewalk Sonar and Altium Designer to run on my desktop.... for that kind of work.



> Looking more towards the future… Whatever happened to the promise of the Next Generation computer? We moved from 8 bit to 16, 16 to 32, 32 to 64. So when’s the Light-Speed computer coming out?

The next-generation computer is largely a myth... because of several factors. One is simply that computers today are very, very complex. Thus, they're broken down into modular components... exactly the kind of thing I was trying to do in the latter days of Commodore. One reason Amigas fell behind on hardware was that the whole system had to be re-invented for each new machine. The PC succeeded in splitting things up nicely. And many of these components, like system chips, CPUs, GPUs, etc. have become such a big deal, one company can only really do one or two of these well. And they're not starting fresh, either, but standing on the shoulders of last year's work.


> Would the Next Gen. computer have super-fast RAM, which would store data like flash RAM?

There is actually technology moving in that direction... non-volatile RAM. This is known as MRAM or FeRAM (two different approaches, magnetic and ferroelectric), and it promises to essentially replace both flash (because it's non-volatile, unlike RAM) and DRAM (because it's very fast, unlike flash) with a single memory type. This has been in development for probably two decades, but it's starting to show up in places. Texas Instruments, for example, has a version of the MSP430 microcontroller (the chip I used in the Nomadio Sensor and receivers) with FeRAM replacing flash and RAM.

Basically, you could turn your computer off, and when you turn it back on, it would resume pretty much right where it left off. The one issue, of course, is that not every device in the system is non-volatile. So drivers still have to understand the power-down state, and re-initialize the hardware they manage.
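
A tiny hypothetical sketch of that last point - generic C, not tied to any real OS or driver model - with non-volatile main memory the driver's own state survives the power cycle, but the chip it manages comes back blank, so a resume hook still has to re-program the hardware from that saved state.

[code]
/* Hypothetical sketch: the driver's state lives in non-volatile memory
 * and survives a power cycle, but the device it manages does not, so a
 * resume hook must re-program the hardware from that saved state. */
#include <stdio.h>

struct uart_state {                 /* lives in (non-volatile) RAM        */
    unsigned baud;
    unsigned parity_even;
};

static struct uart_state saved = { 115200, 0 };

/* Stand-in for register writes -- the real chip forgot these at power-off. */
static void uart_write_reg(const char *reg, unsigned val)
{
    printf("uart: %s <- %u\n", reg, val);
}

/* Resume hook: software state is already there; hardware must be redone. */
static void uart_resume(void)
{
    uart_write_reg("BAUD",   saved.baud);
    uart_write_reg("PARITY", saved.parity_even);
}

int main(void)
{
    /* ...power comes back: memory contents intact, devices blank... */
    uart_resume();
    return 0;
}
[/code]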


> Would the Next Gen. computer have 1TB internal flash ram/rom?

1TB flash ROM today would cost thousands... there is absolutely no need for that much flash on a consumer device. Sony got it right with the PS3... ship a fairly useful sized HDD, but make it super easy to replace it with a larger one. Thus, my PS3 has a 320GB HDD... I can get something close to 1TB in the 2.5" form factor today, or even an SSD if I really want that (it offers no practical advantage other than running a little quieter... there's nothing on such a device that approaches the performance of an HDD).


> Would the Next Gen. computer be able to offer video editing on-the-fly?

Video editing on the fly is not something the average home user needs. We had plans to offer simple home video editing in the Metabox 1000... you'd capture your home video from a camcorder, and have some facility for cutting and conversion. Nothing too sophisticated, because a practical device for the living room just doesn't have the kind of power necessary. And there's very little practical value in mixing live video anyway... unless you're running a television studio, you don't need it. That's not how video editing works.

With that said, the PS3 does offer a free video editing app via download. And it's a bit better than what we were thinking of back in 2000. Of course, the PS3 is many, many times more powerful.

If you haven't noticed, most game consoles do this job very well today. I have a PS3 in my TV room... it's my DVD/BD player. If I lived in an area with reasonable internet services, I could watch all kinds of streaming video: internet, Netflix, etc. I can drop a video in from an SD card (ok, they eliminated the memory cards in cost-reduced models.. the fact mine has a card slot is no accident), stream it from my PC, etc.

In fact, I edit video on my PC all the time, non-realtime editing of course, and even with a 6-core AMD 1090T at 3.2GHz, it's not fast enough. Yet, for modern video editing, switching tracks/sources is virtually free. The overhead is in decoding and re-encoding (if you need to re-encode) the video, and compositing with effects. That's why folks like me have fairly good CPUs... I don't need that for electronics CAD anymore, but you need all the performance you can get for video.
