Ritzi Lee
24-06-2006, 10:59 AM
CPUs are getting faster and faster, and we know more and more about how to emulate analog circuits in plugins. Theoretically (the acid freakz would slag me off for saying this) it's possible to emulate a TB-303 and modulate it completely in software (taking into account things like cross-modulation) to get a perfect emulation. It would just cost an enormous amount of CPU power. But since CPUs keep getting faster, wouldn't it be possible that by, say, 2010 the audible difference between software and hardware will be next to nothing?
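Just to show why the basic digital model is cheap while the "last few percent" of analog behavior is expensive: here's a toy sketch (my own simplification in Python, nothing like a real 303 ladder filter) of a naive sawtooth through a Chamberlin state-variable lowpass. This core runs in a handful of operations per sample; it's the aliasing-free oscillators, filter nonlinearities and component drift that eat the CPU.

```python
import math

def saw_osc(freq, sr, n):
    """Naive (aliasing) sawtooth oscillator: n samples at freq Hz.
    Fine for illustration; a serious emulation needs band-limiting."""
    phase = 0.0
    out = []
    for _ in range(n):
        out.append(2.0 * phase - 1.0)   # map phase 0..1 to -1..+1 ramp
        phase = (phase + freq / sr) % 1.0
    return out

def svf_lowpass(samples, cutoff, sr, damp=0.5):
    """Chamberlin state-variable filter, lowpass tap.
    damp = 1/Q, so smaller damp means more resonance."""
    f = 2.0 * math.sin(math.pi * cutoff / sr)
    low = band = 0.0
    out = []
    for x in samples:
        low += f * band
        high = x - low - damp * band
        band += f * high
        out.append(low)
    return out

# 0.1 s of a 110 Hz saw, filtered at 500 Hz
sr = 44100
voice = svf_lowpass(saw_osc(110.0, sr, 4410), 500.0, sr)
```

That's maybe ten floating-point operations per sample per voice, which even a 2006 CPU laughs at. The expensive part of a "perfect" emulation is everything this sketch leaves out: oversampling, saturation curves, envelope quirks, the accent circuit.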
Or should we take a different road and concentrate on a new kind of synth? How do you see the evolution of audio software?
Would it be possible in the future to have software that can read your mind through a couple of sensors on your head, picking up the sounds you're thinking of? I know from a show on Discovery Science that they're now experimenting with capturing brainwave impulses and translating them into the digital domain.
There's also a revolution going on in the ICT world around how software systems are built. People are now concentrating on Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). To put it simply: different kinds of services and engines are linked together in one network to communicate with each other. Take a web server, an email server and a couple of databases, all connected to one data warehouse system and working in harmony, so that different users with different applications can work together quickly and effectively.
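The core SOA idea above (loosely coupled services talking through a shared layer instead of knowing about each other directly) can be sketched in a few lines. This is just a toy "service bus" I made up for illustration, not any real SOA framework:

```python
class ServiceBus:
    """Toy service bus: services register by name and are called
    through the bus, so callers never depend on each other directly."""

    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler

    def call(self, name, payload):
        if name not in self.services:
            raise KeyError(f"no such service: {name}")
        return self.services[name](payload)

# Hypothetical services: each could live on a different machine;
# the caller only needs the bus and a service name.
bus = ServiceBus()
bus.register("mailer", lambda msg: f"mail sent: {msg}")
bus.register("storage", lambda data: f"stored {len(data)} bytes")

bus.call("mailer", "new mixdown is up")
```

The point is the indirection: swap the "storage" service for a different implementation and nothing that calls it has to change, which is exactly what would let studios with different setups plug into the same network.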
I could see these kinds of technologies being applied to music production. It would bring artists from all over the world closer together. One example: if you wanted to make an album with someone from the USA, someone from India and a couple of people from Europe and Japan, you could set up user groups and shared workspaces, all compatible with each other's studios.
To take it a step further: when you finish a project, you could release it with one click to all the official distribution channels, because the whole music industry would be connected in one big SOA.