From dara1339 at hotmail.com Wed Jun 12 05:51:20 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Wed, 12 Jun 2013 05:51:20 -0400
Subject: [microsound] Annular Eclipse Motet
Message-ID:

https://soundcloud.com/dara-o-shayda/annular-eclipse-chorus

From dara1339 at hotmail.com Wed Jun 12 13:11:57 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Wed, 12 Jun 2013 13:11:57 -0400
Subject: [microsound] Annular Eclipse: Ensemble
Message-ID:

https://soundcloud.com/dara-o-shayda/annular-eclipse-ensemble

From dara1339 at hotmail.com Sat Jun 15 03:39:43 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Sat, 15 Jun 2013 03:39:43 -0400
Subject: [microsound] Annular Eclipse: Counterpoint 2nd Species
Message-ID:

https://soundcloud.com/dara-o-shayda/annular-eclipse-counterpoint

From dara1339 at hotmail.com Sun Jun 16 07:49:43 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Sun, 16 Jun 2013 07:49:43 -0400
Subject: [microsound] 4C+29.30: Black Hole Powered Jets
Message-ID:

https://soundcloud.com/dara-o-shayda/4c2930

From boris.klompus at gmail.com Mon Jun 17 01:34:14 2013
From: boris.klompus at gmail.com (Boris Klompus)
Date: Mon, 17 Jun 2013 01:34:14 -0400
Subject: [microsound] 4C+29.30: Black Hole Powered Jets
In-Reply-To:
References:
Message-ID:

Hi Dara,

I'm enjoying going through these.
I'm curious about your process - you started by taking the image into MetaSynth .. I'm pretty unfamiliar with it. Does it do color-to-frequency mapping, or do you use it to modulate a synthesizer? What's the edge detection image for, etc.? Also, how do you iterate and deal with the time dimension as derived from the image(s)?

Does the code at the bottom of your soundcloud post answer all of my questions?

Boris

On Sun, Jun 16, 2013 at 7:49 AM, Dara Shayda wrote:
> https://soundcloud.com/dara-o-shayda/4c2930

From dara1339 at hotmail.com Mon Jun 17 02:06:22 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Mon, 17 Jun 2013 02:06:22 -0400
Subject: [microsound] 4C+29.30: Black Hole Powered Jets
In-Reply-To:
References:
Message-ID:

Hello Boris

Good to hear from you again.

Before I explain, everything you need is here:
http://metasynth.com/wikifarm/metasynth/doku.php?id=blog

Edward Spiegel is the genius behind the software, and a musician himself.

The original image was obtained from NASA's Chandra X-Ray satellite in the form of a jpg, but its structure is unclear to the naked eye. To bring out the underlying structure you need to process the image with well-known filters, e.g. an Edge Detection filter; I used the implementation in Mathematica 9.0.

So you transform the original NASA image into the line-drawn, edge-detected image and feed the latter to the ImageSynth.

This processed image is then scanned column by column, from LEFT to RIGHT:

1. The x-axis is the progression of time from left to right (from 0 to the length of the image).
2. The y-axis is the frequency range, but this mapping is not standard like your piano keyboard; you can change the mapping of the y-coordinate to frequencies using the MAP in MetaSynth (I included the map in the script at the end of the soundcloud post).
3. The intensity of the pixel is the volume.
4. RED and GREEN are the amplitudes of the stereo LEFT and RIGHT channels. I usually use just Yellow, which is equal amplitude in both channels.
5. You can also assign a synthesizer to every pixel of the image, e.g. a human phoneme, the twang of a string, or a puff on a flue; this way you are not using just raw sounds but actual instruments.

Basically, in this scenario the pixels of the image become the notes on a gigantic piece of sheet music, like a piano roll. You can flip the image, reverse it, or add echoes or reverb by geometrically transforming the image with the usual transforms.

Once you have done all that, you serialize the audio generation from left to right and generate the sound file as usual (a rough Python sketch of this whole scanning scheme follows this message).

Let me know if you need more info
D

On 2013-06-17, at 1:34 AM, Boris Klompus wrote:

> Hi Dara,
>
> [...]
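A minimal sketch, in Python with numpy, of the scanning scheme described above -- an illustration only, not MetaSynth's actual engine. The file name, column duration, frequency range, and the geometric row-to-frequency map are all assumptions standing in for MetaSynth's configurable MAP; pixel intensity drives volume, and the red/green channels drive the stereo LEFT/RIGHT amplitudes, so pure yellow lands equally in both.

import numpy as np
from PIL import Image
from scipy.io import wavfile

SR = 44100                   # sample rate
COL_DUR = 0.05               # seconds of audio per image column (assumed)
F_LO, F_HI = 55.0, 7040.0    # assumed frequency range of the y-axis map

img = np.asarray(Image.open("edge_detected.tiff").convert("RGB")) / 255.0
height, width, _ = img.shape

# Row-to-frequency map: top row is the highest pitch. A geometric spacing
# is assumed here in place of MetaSynth's user-defined MAP.
freqs = np.geomspace(F_HI, F_LO, height)

n = int(COL_DUR * SR)              # samples per column
t = np.arange(n) / SR
out = np.zeros((width * n, 2))     # stereo output buffer
phase = np.zeros(height)           # keep each oscillator phase-continuous

for x in range(width):             # x-axis is time, scanned LEFT to RIGHT
    red, green = img[:, x, 0], img[:, x, 1]    # per-pixel stereo amplitudes
    for y in np.nonzero(red + green)[0]:       # skip silent pixels
        tone = np.sin(phase[y] + 2 * np.pi * freqs[y] * t)
        out[x * n:(x + 1) * n, 0] += red[y] * tone    # LEFT channel
        out[x * n:(x + 1) * n, 1] += green[y] * tone  # RIGHT channel
    phase = (phase + 2 * np.pi * freqs * n / SR) % (2 * np.pi)

out /= max(np.abs(out).max(), 1e-9)    # normalize, then serialize to disk
wavfile.write("sketch.wav", SR, (out * 32767).astype(np.int16))

Flipping the image left-to-right (img[:, ::-1]) plays the piece backwards, and flipping it top-to-bottom (img[::-1]) inverts the pitch mapping -- which is the sense in which the geometric transforms mentioned above act as audio operations.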
From boris.klompus at gmail.com Mon Jun 17 02:29:17 2013
From: boris.klompus at gmail.com (Boris Klompus)
Date: Mon, 17 Jun 2013 02:29:17 -0400
Subject: [microsound] 4C+29.30: Black Hole Powered Jets
In-Reply-To:
References:
Message-ID:

Awesome. Thanks for the explanation.

So the tif file is the end result that produced the sound, right?

The edge detection image leaves an interesting result.. what are the lines meant to illustrate? Do they surround fields within the original image where the most unique light activity occurs, or is the line itself the place where the light changes (not sure how to understand edge detection in terms of light or images, actually -- basing my understanding on programming..)?

Boris

On Mon, Jun 17, 2013 at 2:06 AM, Dara Shayda wrote:

> Hello Boris
>
> [...]

From dara1339 at hotmail.com Mon Jun 17 02:36:09 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Mon, 17 Jun 2013 02:36:09 -0400
Subject: [microsound] 4C+29.30: Black Hole Powered Jets
In-Reply-To:
References:
Message-ID:

Hello Boris

MetaSynth does its own image-processing filters and transformations; in cases where I used those on top of Mathematica's, I save the processed image files as .tiff, so others can produce the sounds in a similar fashion.

The final sound is a .caf file, which I usually do not publish, but if you need it, write me.

Edge Detect:
http://reference.wolfram.com/mathematica/ref/EdgeDetect.html

This is the Mathematica manual on the gradient method (the calculus gradient operator) used to detect sudden changes in a two-dimensional function. Their manual is the best explanation, with many examples.

On 2013-06-17, at 2:29 AM, Boris Klompus wrote:

> Awesome. Thanks for the explanation.
>
> [...]
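A rough numpy illustration of the gradient idea behind that manual page -- an assumption for exposition, not Mathematica's EdgeDetect, which layers smoothing, thinning, and thresholding refinements on top. An edge is where the 2-D brightness function changes suddenly, i.e. where the magnitude of its gradient is large; the input file name and the threshold rule below are made up for the example.

import numpy as np
from PIL import Image

# Load the image as a 2-D brightness function f(y, x).
img = np.asarray(Image.open("chandra.jpg").convert("L"), dtype=float)

gy, gx = np.gradient(img)    # partial derivatives along y and x
mag = np.hypot(gx, gy)       # |grad f| = sqrt(fx^2 + fy^2)

# Crude threshold (assumed): keep pixels whose gradient magnitude sits
# well above the image-wide average.
edges = mag > mag.mean() + 2 * mag.std()

Image.fromarray((edges * 255).astype(np.uint8)).save("edges.tiff")

In these terms the answer to the question above is the second option: the drawn line is not a border around a region of activity but the locus where the brightness itself changes most sharply -- the line sits on the edge rather than around it.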
From dara1339 at hotmail.com Mon Jun 17 04:38:31 2013
From: dara1339 at hotmail.com (Dara Shayda)
Date: Mon, 17 Jun 2013 04:38:31 -0400
Subject: [microsound] 4c+29.30: Watershed Components
Message-ID:

https://soundcloud.com/dara-o-shayda/4c2930watershed