[microsound] 4C+29.30: Black Hole Powered Jets

Boris Klompus boris.klompus at gmail.com
Mon Jun 17 02:29:17 EDT 2013


Awesome. Thanks for the explanation.

So the tif file is the end result that produced the sound, right?

The edge detection image leaves an interesting result. What are the lines
meant to illustrate? Do they surround fields within the original image
where the most unique light activity occurs, or is the line itself the
place where the light is? (I'm not sure how to understand edge detection in
terms of light or images, actually; I'm basing my understanding on
programming.)

Boris


On Mon, Jun 17, 2013 at 2:06 AM, Dara Shayda <dara1339 at hotmail.com> wrote:

> Hello Boris,
>
> Good to hear from you again.
>
> Before I explain, everything you need is here:
> http://metasynth.com/wikifarm/metasynth/doku.php?id=blog
>
> Edward Spiegel is the genius behind the software and a musician himself.
>
>
> The original image was obtained from NASA's Chandra X-Ray satellite in the
> form of a JPEG, but its structure is unclear to the naked eye. To find the
> underlying structure you need to process the image with known filters, e.g.
> an edge-detection filter; I used the implementation in Mathematica 9.0.
>
> So you transform the original NASA image into the line-drawn, edge-detected
> image and feed the latter to the ImageSynth.
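The edge-detection step can be sketched in pure Python. This is an illustration only (a simple Sobel-style gradient with a made-up threshold), not the Mathematica 9.0 implementation Dara actually used on the Chandra JPEG:

```python
# Minimal edge-detection sketch: Sobel gradient magnitude over a grayscale
# grid. Illustrative only; the post used Mathematica 9.0's built-in filter.

def sobel_edges(img, threshold=1.0):
    """img: 2D list of grayscale values in [0, 1]. Returns a binary edge map."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            # Mark a pixel as an edge where the gradient magnitude is large,
            # i.e. where brightness changes sharply between neighbors.
            edges[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return edges

# Tiny test image: dark left half, bright right half -> one vertical edge.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
edge_map = sobel_edges(img)
```

This also answers Boris's question above: the lines sit where brightness changes most sharply, i.e. along boundaries between regions, not inside the bright regions themselves.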
>
> This processed image is then scanned from LEFT to RIGHT, column by column:
>
> 1. The x-axis is the progression of time from left to right (from 0 to the
> length of the image).
> 2. The y-axis is the frequency range, but this mapping is not standard like
> a piano keyboard; you can change the mapping of the y-coordinate to
> frequencies using the MAP in MetaSynth (I included the map in the script at
> the end of the SoundCloud post).
>
> 3. The intensity of the pixel is the volume.
> 4. RED and GREEN are the amplitudes of the stereo LEFT and RIGHT
> channels. I usually use just yellow, which is equal amplitude in both
> channels.
>
> 5. You can also assign a synthesizer to every pixel of the image, e.g. a
> human phoneme, the twang of a string, or a puff on a flute; this way you are
> not using just raw sounds but rather actual instruments.
>
> Basically, in this scenario the pixels of the image become the notes on a
> gigantic piece of sheet music, like a piano roll. You can flip the image,
> reverse it, or add echoes or reverb by geometrically transforming the image
> with the usual transforms.
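The image-as-score idea makes those geometric transforms concrete. A rough pure-Python sketch (hypothetical helpers, not MetaSynth operations; here `image[y][x]` is a grayscale intensity in [0, 1]):

```python
def reverse_time(image):
    """Horizontal flip: columns are time steps, so this plays the score backwards."""
    return [row[::-1] for row in image]

def invert_pitch(image):
    """Vertical flip: rows are frequencies, so this turns the melody upside down."""
    return image[::-1]

def echo(image, delay_cols=2, decay=0.5):
    """Mix in a dimmed copy shifted to the right: a simple echo, since a
    rightward shift is a delay in time and lower intensity is lower volume."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(delay_cols, w):
            out[y][x] = min(1.0, out[y][x] + decay * image[y][x - delay_cols])
    return out
```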
>
> Once you have done all that, you serialize the audio generation from left to
> right and generate the sound file as usual.
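The mapping described above (columns as time, rows as frequency, intensity as volume, red/green as left/right amplitude) can be sketched as a tiny additive synthesizer. Everything specific here is an assumption for illustration: the sample rate, column duration, and the logarithmic row-to-frequency map stand in for MetaSynth's configurable MAP:

```python
import math

SR = 8000           # sample rate (assumption, for illustration)
COL_SECONDS = 0.05  # how long each image column sounds (assumption)

def column_freqs(height, lo=110.0, hi=1760.0):
    """Map row index -> frequency. MetaSynth uses a configurable MAP; this
    stand-in spaces rows logarithmically, with the top row highest in pitch."""
    return [lo * (hi / lo) ** ((height - 1 - y) / (height - 1))
            for y in range(height)]

def render(image):
    """image[y][x] = (red, green) amplitudes in [0, 1].
    Scans columns LEFT to RIGHT; red drives the left channel, green the
    right. Returns (left, right) lists of float samples."""
    h, w = len(image), len(image[0])
    freqs = column_freqs(h)
    n = int(SR * COL_SECONDS)
    left, right = [], []
    for x in range(w):                      # x-axis: time
        for i in range(n):
            t = i / SR
            l = r = 0.0
            for y in range(h):              # y-axis: frequency
                red, green = image[y][x]
                if red or green:
                    s = math.sin(2 * math.pi * freqs[y] * t)
                    l += red * s            # RED  -> LEFT amplitude
                    r += green * s          # GREEN -> RIGHT amplitude
            left.append(l)
            right.append(r)
    return left, right

# A single "yellow" pixel (equal red and green) in a 3-row, 1-column image:
# equal amplitude in both channels, as the post describes.
img = [[(0.0, 0.0)], [(0.5, 0.5)], [(0.0, 0.0)]]
L, R = render(img)
```

Flipping or shifting `image` before calling `render` gives the reversals and echoes mentioned above, since the image is the score.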
>
> Let me know if you need more info
> D
>
>
>
> On 2013-06-17, at 1:34 AM, Boris Klompus <boris.klompus at gmail.com> wrote:
>
> Hi Dara,
>
> I'm enjoying going through these.
> I'm curious about your process - you started with the image in MetaSynth...
> I'm pretty unfamiliar with it. Does it map color to frequency, or do you use
> it to modulate a synthesizer? What's the edge detection image, etc.? Also,
> how do you iterate and deal with the time dimension as derived from the
> image(s)?
>
> Does the code at the bottom of your SoundCloud post answer all of my
> questions?
>
> Boris
>
>
> On Sun, Jun 16, 2013 at 7:49 AM, Dara Shayda <dara1339 at hotmail.com> wrote:
>
>> https://soundcloud.com/dara-o-shayda/4c2930
>> _______________________________________________
>> microsound mailing list
>> microsound at microsound.org
>> http://or8.net/mailman/listinfo/microsound
>>
>

