Martin Howse

tags: PARTICIPANTS

martin development

27/11
Martin was unable to come to Taiwan due to health issues. He sent notes.
https://hackmd.io/kFejznSlT3CVr_UxDID9Zw

Martin sound notes

Day One

  • Checking lists - we need a small mixer and monitor speakers
  • Making test recordings with the USB microphone: I suggest as many layers as possible to decouple everything, also placed on foam on the concrete floor
  • We have a problem with Audacity not registering signals above 20 kHz - checking this with baudline we get a nice full spectrum and can also record files
  • Working later on Python code which might allow the bugs to compose the libretto
  • We need more soundproofing


Day Two

We are interested in Sitophilus oryzae!

  • Piezo workings, also trying a piezo radio transmitter - we will always have feedback issues, so how can these be exploited with the help of the bugs?

Day Three

TODO: test other radio transmitters, test the divide-down circuit
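
The divide-down circuit can be modelled in software before building it - a sketch (function name and divisor are illustrative): toggle a square-wave output once every few rising zero crossings, so an ultrasonic input comes out at a fraction of its frequency.

```python
import numpy as np

def divide_down(signal, divisor=8):
    """Software model of a divide-down circuit: toggle a square-wave
    output every (divisor/2) rising zero crossings, so the output
    frequency is the input frequency divided by `divisor`."""
    out = np.empty(len(signal))
    level = 1.0
    crossings = 0
    prev = signal[0]
    for i, s in enumerate(signal):
        if prev <= 0 < s:  # rising zero crossing
            crossings += 1
            if crossings >= divisor // 2:
                level = -level
                crossings = 0
        out[i] = level
        prev = s
    return out
```

With divisor=8, a 40 kHz bug signal would come out as a 5 kHz square wave, well inside the audible range.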

See also, for downsampling and filtering:

https://tai-studio.org/index.php/2011/03/materials-for-tais/

Some SC code to high-pass and delay:

(
{
	var signal;
	signal = AudioIn.ar(1);
	d = DelayL.ar(
		HPF.ar(signal, 4000.0, 40), 2, 2
	);
	//Out.ar(0, signal);
	d.dup
}.play(s)
)
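
The same chain can be prototyped offline in Python - a minimal sketch, assuming scipy is available (the 4 kHz cutoff and 2 s delay mirror the SC patch):

```python
import numpy as np
from scipy.signal import butter, lfilter

def highpass_delay(signal, fs, cutoff=4000.0, delay_s=2.0, order=4):
    """High-pass the signal, then prepend silence to delay it,
    roughly mirroring HPF.ar -> DelayL.ar in the SC patch."""
    nyq = 0.5 * fs
    b, a = butter(order, cutoff / nyq, btype='high')
    hp = lfilter(b, a, signal)
    pad = np.zeros(int(delay_s * fs))
    return np.concatenate([pad, hp])
```

Unlike DelayL this is offline (the output simply grows by the delay length); for real-time use a ring buffer would be needed instead.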

Some code to generate libretto from bug energy:

# for more recent NLTK which doesn't have bm=NgramModel(2,cry) # bigram model
# this one is working!
# fixed now for python3

# python3 bug_libretto.py

# http://www.nltk.org/api/nltk.lm.html#module-nltk.lm

# you may need to apply fix here to portaudio: https://stackoverflow.com/questions/59006083/how-to-install-portaudio-on-pi-properly

# generates libretto from sound source energy of bugs

from nltk import FreqDist, DictionaryProbDist, sent_tokenize
from time import sleep
import math
import random
import time
#import serial
import codecs
import os
import subprocess as sp
from nltk.lm import MLE
from nltk.util import bigrams
from nltk.lm.preprocessing import padded_everygram_pipeline
import unicodedata
import struct
import numpy as np
import sounddevice as sd
from scipy.signal import butter, lfilter

def butter_bandpass(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype='band')
    return b, a    

def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    y = lfilter(b, a, data)
    return y


def convert_accents(text):
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore')

#soundings
INTERVAL = 1
CHANNELS = 1
RATE = 48000
CHUNK = int(RATE * INTERVAL)
CHUNK = 4000 # override: shorter chunks so letters arrive faster
sd.default.samplerate = RATE
sd.default.device = 3 # this is the mic

#letterings
#f = codecs.open("/root/projects/archived/worms_txt/chants","r","utf-8")
f = codecs.open("/root/notes_and_projectsNOW/bug/dongio","r","utf-8") # wherever you have the source text file to get letter/letter frequencies from - longer is better. here don giovanni
#f = codecs.open("/root/projects/archived/worms_txt/short")
cry = f.read()
#cry=[y.lower() for y in cry]
#cry=[convert_accents(word) for word in cry]

uf=FreqDist(l for l in cry)
up=DictionaryProbDist(uf,normalize=True)
#bm=NgramModel(2,cry) # bigram model
#lm.fit(cry, vocab)

#print list(bigrams(text[0]))
#print list(bigrams(cry))

# what we need from text is:  a text that is a list of sentences, where each sentence is a list of strings.
# eg. text = [['a', 'b', 'c'], ['a', 'c', 'd', 'c', 'e', 'f']]

sentt=[]
 
for sent in sent_tokenize(cry):
    letterss=[]
    #sentt.append(sent)
    for letters in sent:
        letterss.append(letters)
    sentt.append(letterss)
    
#print sentt
#cryyy = list(cryy)

#print cryy

train, vocab = padded_everygram_pipeline(2, sentt)
lm = MLE(2)
lm.fit(train, vocab)
#print lm.vocab 

#for prob?
#print lm.score("a",list("b"))


#print lm.generate(5, text_seed=['c'], random_seed=3))

def genereate(dct,model,letter,n):
    probb=[]
    result=''
    total=''
    line=''
    for x in range(n):
        for l in dct.samples():
            prob=model.logscore(l,tuple(letter))
#            print prob, letter, l
            probb.append((prob,l))
        probb.sort()
        probb.reverse()
        #        print probb
        #        line = ser.read(1)
        #        line = random.randint(0,32)
        #        line=1
        # line is our line now spectral
        # problem is to overlap our chunks otherwise we have to wait so long
        data = sd.rec(CHUNK, samplerate=RATE, channels=1)
        sd.wait()
        y = data[:, 0]  # flatten the (CHUNK, 1) recording to 1-D for filtering
        # band-pass 500-1000 Hz; fs must match the recording rate
        band = butter_bandpass_filter(y, 500, 1000, RATE, 5)
        # energy as the sum of squared amplitudes in the band
        energy = sum([x**2 for x in band])
        nummm = int(1000.0*energy)
        if nummm<0:
            nummm=2;
        if nummm>=len(probb):
            nummm=len(probb)-1;
        letter=probb[random.randint(0,int(nummm))][1]
        result=result+letter
        probb=[]
        if letter=="\n":
            return result[0:-1]
    return result[0:-1]

inpt=time.time()
#ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
seedletter='e'

fullon=[]
count=0

#pr=lm.generate(2000) # this seems to work with our model but not our generate which lacks spaces for one thing
#print ''.join(pr)
for x in range(100):
    x=genereate(up,lm,seedletter,64) # max length of line
    print(x)
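
The band-energy measure driving the letter choice can be sanity-checked in isolation - a sketch with a synthetic test tone (using squared amplitudes for energy; the 500-1000 Hz band matches the code above):

```python
import numpy as np
from scipy.signal import butter, lfilter

def band_energy(data, lowcut, highcut, fs, order=5):
    """Band-pass, then sum of squared amplitudes."""
    nyq = 0.5 * fs
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype='band')
    return float(np.sum(lfilter(b, a, data) ** 2))

fs = 48000
t = np.arange(fs) / fs
in_band = np.sin(2 * np.pi * 700 * t)    # inside 500-1000 Hz
out_band = np.sin(2 * np.pi * 8000 * t)  # well outside the band

e_in = band_energy(in_band, 500, 1000, fs)
e_out = band_energy(out_band, 500, 1000, fs)
# e_in should dominate e_out by orders of magnitude
```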

More SuperCollider, with downsampling:

(
o = Server.default.options;
o.sampleRate=96000;
Server.default.reboot;
)

(
{
	var in;
	var bits=16; // try bits=16, downsamp=2
	var down;
	var downsamp=4;
	var signal;
	signal = AudioIn.ar(1);
	in = signal.round(0.5 ** bits);
	down = Latch.ar(
		in,
		Impulse.ar(SampleRate.ir / downsamp.max(4))
	);
	signal=blend(in, down, (downsamp - 1).clip(0, 1));
	d=DelayL.ar(
		HPF.ar(signal, 4000.0, 40),2,2
	);
	//Out.ar(0,in);

	(d).dup

}.play(s)
)

And, borrowed from Fredrik, an LFSR with input:

(
var w, a;
w= Window("lfsr", Rect(100, 100, 520, 200)).front;
Slider(w, Rect(10, 10, 500, 25)).action_({|view| a.set(\rate, view.value*2000)}).value= 400/2000;
Slider(w, Rect(10, 40, 500, 25)).action_({|view| a.set(\length, view.value*32)}).value= 16/32;
a= {|rate= 400, iseed= 2r1000, tap1= 1, tap2= 3, tap3= 5, length= 16|
    var l, b, trig, o, signal, in;
    var buf= LocalBuf(1);
    buf.set(iseed);
    trig= Impulse.ar(rate);
    l= Demand.ar(trig, 0, Dbufrd(buf));  //read
    //b= l.bitXor(l>>tap1).bitXor(l>>tap2).bitXor(l>>tap3)&1;  //modify
	signal = AudioIn.ar(1);
	in = signal.round(0.5 ** 8);
	//b= l.bitXor(l>>tap1).bitXor(l>>tap2).bitXor(l>>tap3).bitXor(in)&1;  //modify
	b= (in)&1;  //modify

	l= (l>>1)|(b<<(length-1));  //lfsr
    Demand.ar(trig, 0, Dbufwr(l, buf));  //write
    o= PulseCount.ar(Impulse.ar(rate*length), trig);  //bits
    l>>o&1!2;  //output
}.play;
CmdPeriod.doOnce({w.close});
)
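
The bit-twiddling is easier to see outside SC - a minimal Fibonacci LFSR step in Python (length and taps as in the patch; the seed is the patch's iseed, the function name is illustrative):

```python
def lfsr_step(state, length=16, taps=(1, 3, 5)):
    """One LFSR step: XOR the tapped bits together, then feed the
    result back in as the new top bit (cf. the commented b= line)."""
    b = state
    for t in taps:
        b ^= state >> t
    b &= 1
    return (state >> 1) | (b << (length - 1))

state = 0b1000  # iseed from the patch
for _ in range(5):
    state = lfsr_step(state)
```

In the patch the feedback bit is further XORed with (or replaced by) the quantised audio input, so the bugs perturb the register directly.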

Day Four

Streaming in for the performance - I use a remote Jitsi stream from the rice bugs in a remote/quieter location, with supercollider (the code is more or less as above, with the addition of sliders for volume and high-pass frequency).

Using pulseaudio (pulseaudio -vvv) and qjackctl, with extra configuration in qjackctl - then work with the routing of the pulseaudio sink and source.

https://www.celesteh.com/blog/tag/supercollider/

Day Five

Flying bugs!

Setup of a remote stream (icecast?) from an isolated bug location with the USB microphone - also spectral capture.

mplayer -nocache http://example.org:8000/node1 -> qjackctl -> sc

Approaches

Acts/ideas

arecord -D hw:0

  • Sampling and downsampling: mic->highpass->downsample - supercollider
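
What the SC Latch line does can be sketched in Python: keep every n-th sample and hold it, deliberately aliasing ultrasonic content down into the audible range (function name and factor are illustrative):

```python
import numpy as np

def sample_hold_downsample(signal, factor):
    """Keep every factor-th sample and hold it, like Latch.ar
    driven by Impulse.ar(SampleRate.ir / factor)."""
    held = np.repeat(signal[::factor], factor)
    return held[:len(signal)]
```

The output stays at the original sample rate, but with only 1/factor of the bandwidth, so components above the new Nyquist fold down and become audible.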

Technical

  • USB ultrasonic microphone and Raspberry Pi 4 (software based)
  • Ultrasonic MEMS microphones and downconverter - audible signals
  • Ultrasonic MEMS microphones and amplifier, sampled by the Pi 4 and an audio codec - Raspberry Pi shield - HiFiBerry DAC+ ADC Pro
  • Radio transmission from vibrations from the bugs
  • Audio communication to the bugs (Pi based, with a small amplifier and Taro speakers)

Notes: vibration damping from below, protection of sensitive mics from moisture/humidity, dealing with ultrasonic signals.

Platform: pd, sc or custom

https://github.com/redFrik/supercolliderStandaloneRPI2

Experiments

  • how to capture and visualise spectrum of bug activity?

https://www.baudline.com/download.html

spectrogram view (per track) in Audacity

sox sound.wav -n spectrogram (generates png)

python and matplotlib

https://medium.com/quick-code/python-audio-spectrum-analyser-6a3c54ad950
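
A minimal matplotlib spectrogram along those lines - a sketch using a synthetic stand-in signal (swap in samples read from a recording):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display (e.g. headless on the pi)
import matplotlib.pyplot as plt

fs = 192000
t = np.arange(fs) / fs
# stand-in for bug audio: a 45 kHz tone plus a little noise
sig = np.sin(2 * np.pi * 45000 * t) + 0.1 * np.random.randn(fs)

plt.specgram(sig, NFFT=1024, Fs=fs)
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.savefig("spectrogram.png")
```

At 192 kHz the plot reaches up to 96 kHz, so ultrasonic bug activity shows directly.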

freqscope for sc: https://doc.sccode.org/Classes/FreqScope.html

  • install software for audio on the pi - supercollider and jackd are installed; still needs puredata or another platform
  • capture and play back ultrasonic audio from the bugs (pi and/or laptop), examine feedback and vibration
  • generate high frequency signals from GPIO on the pi and interface with speakers

USB microphone

Ultramic UM192K (https://www.dodotronic.com/product/ultramic-um192k/?v=2a47ad90f2ae)

Notes - the end has a cap and, when it is removed, the microphone will need to be protected from moisture/humidity without impacting its sensitivity too much. The same goes for all microphone elements.

MEMS microphones and downconversion

Bug FM transmitters around 100 MHz

The design has changed a bit - it can be cut from copper foil using a vinyl cutter and easily assembled. Very sensitive to vibration. Tune an FM receiver to 98-105 MHz.

I was also thinking some rice could be sprayed with copper; this could interfere with the transmission. Needs to be experimented with.

PI4 and HiFiBerry DAC+ ADC Pro audio shield

1 Analogue input, phone jack 3.5mm
2 Analogue output RCA
3 Analogue output (p5)
4 Input configuration jumper (J1)
5 Alternative input connector (P6)

https://www.hifiberry.com/docs/data-sheets/datasheet-dac-adc/

Shipping/ordering

Shipping list:

  • USB ultrasonic microphone: https://www.dodotronic.com/product/ultramic-um192k/?v=2a47ad90f2ae
  • Raspberry PI4 + 16GB sdcard (with pi os installed and audio configured) + power supply (EU plug) + HiFiBerry DAC+ ADC Pro audio shield
  • 2x MEMS microphones: one with downconversion to the audible range and one with amplifiers for sampling with the Pi 4 + ADC (there were 3 but one didn't work - they are very cheap)
  • 4x printed radio transmitters
  • 1x piezo radio transmitter
  • some electronic parts for making amplifiers and transmitters

Shopping list:

Similar products are available in Taiwan:

electronics parts store (電子材料行)

  • audio cables (1.5 meters plus):

electronics parts store (電子材料行)

electronics parts store (電子材料行)

electronics parts store (電子材料行)

SparkFun MEMS Microphone Breakout – INMP401 (ADMP401) microphone sensor module

  • basic electronic parts: 2xMCP6002 amplifier (DIP), breadboard, 10x 100k resistors, wire, jumpers

electronics parts store (電子材料行)

  • Raspberry PI3 + 16GB microSD card + 5V 2.5A power supply with micro-USB connector for the Pi
  • 2x FM/AM radio receivers, like:

electronics parts store (電子材料行)

  • 4x 9v batteries, 2x AAA batteries

electronics parts store (電子材料行)