Bahamut Posted December 28, 2005
Ok, I have two tracks, one at 140 BPM and the other at 149, and I want to mix them. So I change both their tempos to about halfway, 144 BPM, while keeping the pitch intact (I want that because they are in harmony at some point)... but now both tracks sound fucked up. So I guess this method is only useful for changing the tempo by 1 or 2 BPM at most? It seems 4 is a bit too much. I use MixMeister, by the way. Is there better software for timestretching, or is it simply impossible to get this right?
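(A quick sanity check on the numbers above, nothing more than arithmetic: meeting in the middle at 144 BPM changes each track's length by only about 3%, well within the 10-15% figure mentioned further down the thread.)

[code]
# Rough arithmetic on the BPMs mentioned above: meeting at 144 BPM changes
# each track's length by only a few percent.
for bpm in (140, 149):
    length_factor = bpm / 144          # new length relative to the original
    print(f"{bpm} -> 144 BPM: length x{length_factor:.3f} ({(length_factor - 1) * 100:+.1f}%)")
# 140 -> 144 BPM: length x0.972 (-2.8%)
# 149 -> 144 BPM: length x1.035 (+3.5%)
[/code]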
Digital Psyence Posted December 28, 2005
In Traktor DJ Studio you can easily change the BPM by 4-5 ticks without any big change in the pitch... don't know about MixMeister.
Amygdala Posted December 28, 2005
Yup, it all comes down to the timestretch algorithm... Theoretically, it should be possible to stretch 10-15% without any audible loss of quality. I don't know any particular programs, but try to find a stretcher that uses the phase-vocoding principle.
-A
Philter Posted December 28, 2005
Try Ableton Live. It will in most cases sound grainy though... depending on the tracks.
Bahamut (author) Posted December 28, 2005
Thanks for your input. I've also tried Kontakt, but the results are about the same. I guess it also depends on what you define as 'audible loss of quality', because I can hear any change above 1 BPM or so... especially when the tempo goes up.
Amygdala Posted December 29, 2005
> I guess it also depends on what you define as 'audible loss of quality'
I define it as "no way can one ever tell the difference". I know it's possible - I've heard music slow down and stop completely, without quality loss... It was classical music, but still. The timestretcher was written by one of my university professors, and it was very good. I think I actually have the source code somewhere, but it's a couple of years ago... I'll let you know if I find it.
-A
Colin OOOD Posted December 30, 2005
Current opinion (from people whose opinions I trust) is that with complex material, Ableton Live is about as good as it gets. Try the new Complex warp mode in v5.
slyman604 Posted December 30, 2005
SX will also do this with its time-stretch process. As long as you know the BPM of the source material it's easy: just set the BPM you want as the output.
Rowe Posted February 8, 2006
Acid Pro 4 is pretty good for that, I believe.
niobium Posted February 8, 2006
> Acid Pro 4 is pretty good for that, I believe.
These algorithms must be very, very sophisticated. I can't fucking imagine where to start.
Amygdala Posted February 8, 2006
> These algorithms must be very, very sophisticated. I can't fucking imagine where to start.
THIS (the DAFX book) is a good place to start.
-A
David Posted February 8, 2006
Cubase and WaveLab use an algorithm called MPEX, and it's really good for time-stretching, time-compressing and transposing audio material. Good luck!
niobium Posted February 8, 2006
> THIS (the DAFX book) is a good place to start.
Thank you. I wish I could afford that book. Edit: cunts give you a five-page taste.
electric blue Posted February 8, 2006
If you have some kind of algorithm for the purpose, you can also use MATLAB for precise results, but that's quite tricky.
Amygdala Posted February 9, 2006
> Thank you. I wish I could afford that book.
Goodness gracious, yes - that is expensive. I remember it as half that price or so. If you look through the table of contents, you can find the names of the algorithms and maybe Google them - they should be out there.
-A
niobium Posted February 9, 2006
> If you have some kind of algorithm for the purpose, you can also use MATLAB for precise results, but that's quite tricky.
Yeah, the book refers to MATLAB. I am mostly interested in the theory behind it.
electric blue Posted February 9, 2006
I wish I could help you with the theory, but it's been years since I last touched MATLAB or any production tool. Here are some waypoints: with MATLAB you can open WAV files as N x 2 matrices (one column per channel, one row per sample at the sampling rate). Try to find an algorithm that processes this type of matrix, or you'll first have to break the main WAV into its frequency groups (i.e. the high-frequency sounds separated out), and that's tricky. And for info on signal processing you'll probably need an engineering background.
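A minimal sketch of those waypoints, using Python with NumPy/SciPy rather than MATLAB (the filename "track.wav", the 16-bit PCM assumption and the 1 kHz crossover are placeholders for illustration, not anything from the thread): load a stereo WAV as an N x 2 array and split it crudely into a low and a high frequency group.

[code]
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, data = wavfile.read("track.wav")        # shape (num_samples, 2) for a stereo file
data = data.astype(np.float64) / 32768.0      # normalise 16-bit PCM to roughly [-1, 1]

# Crude split into a low and a high frequency group around 1 kHz, as suggested
# above (the crossover frequency is arbitrary, just for illustration).
sos_low = butter(4, 1000, btype="lowpass", fs=rate, output="sos")
sos_high = butter(4, 1000, btype="highpass", fs=rate, output="sos")
low_band = sosfilt(sos_low, data, axis=0)
high_band = sosfilt(sos_high, data, axis=0)

print(rate, data.shape, low_band.shape, high_band.shape)
[/code]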
Amygdala Posted February 9, 2006
> Yeah, the book refers to MATLAB. I am mostly interested in the theory behind it.
It's been three years since I read DAFX, but I'm pretty sure MATLAB is sort of implied in it, and used as a tool for teaching the algorithms. The theory of DSP is the main focus, exemplified in MATLAB.
-A
niobium Posted February 17, 2006
Well, that's a start. But it is more or less a given that each channel is a Z x 1 vector, I assume, right? Each component of the channel vector being a sample value. Begin conjecturing ramble: next I guess we need to interpolate those raw samples into an adequately precise analog Fourier series and then find the closest digital fit to that series. This is an elaboration on what you mentioned: now we take the digitally measured frequencies (from the original gear or a reasonable facsimile) and find the Fourier series associated with the rhythm components of the composition. Separate the rhythm constituents out of the total series, as suggested. Perform operation F(X), where X is the rhythm vector corresponding to the digital Fourier series and F is a 'complicated' timestretch operation. Similarly perform operation G(Z - X) = G(Y), with G probably being a different 'complicated' timestretch operation. I suspect we can add a variety of constraints on F and G to simplify the goal, but this is where I start having to pick up a book... perhaps way earlier.
SkeletonMan Posted February 17, 2006
> Well, that's a start. [...]
And you are not overlooking now to develop the cursor in bi-overstepping vector simulation there? Say, let sound be X divided by the number of notes f(x), the equation of which would be the interpolation of vector Z (Z of course being what would come out of the loudspeakers)... I'm just glad that there are people out there like you who know how to do this shite so the SkeletonMan gets some music to dance to... Hope no offense taken, guys!!!
SkeletonMan Posted February 18, 2006
Hope the SkeletonMan didn't offend anyone there... Nah, I'm sure I didn't.
Amygdala Posted February 18, 2006
> Well, that's a start. [...]
Oh no, it's not really that complicated or difficult. You don't need any rhythm detection and such to write a decent timestretch... Also, that would not work on non-rhythmic sound. An FFT-based solution is to compute the discrete Fourier transform (easily done in MATLAB, or sh*t-fast in C with the FFTW library), then insert more frames than the original sound has (to stretch) or remove some frames (to time-compress), and finally apply the inverse FFT. A niftier solution is to interpolate between frames, though. Then there is the issue of the phases, which often need heavy correction - but it's not that hard... If you want, I can dig out some cool articles; otherwise, look for Jean Laroche and Mark Dolson. Many algorithms work entirely in the time domain, still without rhythm detection. Granular solutions are good.
-A
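A minimal sketch of that FFT-based approach in Python with NumPy (the function name, frame size and hop size are made up for the example; mono float input is assumed, and there is no transient handling or proper phase unwrapping - just the basic idea of reading analysis frames at one hop, writing synthesis frames at another, and accumulating the bin phases so the overlapped frames stay roughly coherent):

[code]
import numpy as np

def time_stretch(x, factor, n_fft=2048, hop=512):
    """Return x stretched to roughly factor * len(x) samples (factor > 1 slows it down)."""
    window = np.hanning(n_fft)
    phase = np.zeros(n_fft // 2 + 1)            # accumulated synthesis phase per FFT bin
    out = np.zeros(int(len(x) * factor) + n_fft)

    analysis_hop = hop / factor                 # read the input slower or faster than we write
    n_frames = int((len(x) - n_fft - hop) / analysis_hop)
    for step in range(n_frames):
        i = int(step * analysis_hop)
        # Two overlapping analysis frames, 'hop' samples apart in the input.
        s1 = np.fft.rfft(window * x[i:i + n_fft])
        s2 = np.fft.rfft(window * x[i + hop:i + hop + n_fft])
        # Advance each bin's phase by the measured phase increment between the frames.
        phase += np.angle(s2) - np.angle(s1)
        frame = np.fft.irfft(np.abs(s2) * np.exp(1j * phase))
        # Overlap-add the rephased frame at the synthesis hop.
        j = step * hop
        out[j:j + n_fft] += window * frame
    return out
[/code]

For the original question, stretching a 149 BPM track to 144 BPM would then be roughly time_stretch(x, 149 / 144), a factor of about 1.035.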
niobium Posted February 18, 2006
> Hope the SkeletonMan didn't offend anyone there. [...]
No, no, I still love you.
niobium Posted February 18, 2006
> Oh no, it's not really that complicated or difficult. [...]
Yeah, I was thinking more along the lines of 'interpolating between the frames', as you said. Thanks for quenching my overly complicated notion with a much tidier explanation. I will have a look at Laroche and Dolson. Thanks, Amygdala.
Amygdala Posted February 19, 2006
Anytime. Actually, I did my master's thesis on the FFT, used for pitch-shifting. It's pretty much the same problem as time-stretching: if you can timestretch perfectly, then you can pitch-shift perfectly, and vice versa. I made a realtime algorithm extending the SuperCollider language. Great fun, and I learned a lot of math in the process!
-A
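A sketch of that equivalence, reusing the hypothetical time_stretch() from the earlier sketch: time-stretch by the desired frequency ratio, then resample the result back towards the original length, so the duration is roughly preserved while the pitch moves.

[code]
import numpy as np

def pitch_shift(x, semitones):
    ratio = 2.0 ** (semitones / 12.0)           # frequency ratio for the shift
    stretched = time_stretch(x, ratio)          # ratio > 1: longer, same pitch
    # Resample (play back) the stretched signal 'ratio' times faster: the pitch
    # rises by 'ratio' and the duration comes back to roughly the original.
    positions = np.arange(0, len(stretched) - 1, ratio)
    return np.interp(positions, np.arange(len(stretched)), stretched)
[/code]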