ChatGPT v Orba 1

Part 1

Around page 22 of the "Orba hacking knowledge base", a year or so ago, @Subskybox and I were dissecting the eventData string the Orba 1 uses to represent sequences. @Subsky did some clever mathematical analysis while I did the donkey work of setting up experiments and recording the results.


Some of the experiments were based on a song called "DPC" which played the first seven notes of a minor scale. I've attached the song file, console output, and a spreadsheet @Subsky put together after analysing the data.

The eventData string is a mix of note and performance data, but this "DPC" test simplifies things to only include note data. This is organised as a series of "note blocks":

Note Block 1:
PlayNote: 16
startTicksLSB: 7
startTicksMSB: 0
Note #: 62
Vel On: 120
Vel Off: 90
DurTicksLSB: -11
DurTicksMSB: 1

Note Block 2:
PlayNote: 16
startTicksLSB: 89
startTicksMSB: 7
Note #: 64
Vel On: 127
Vel Off: 92
DurTicksLSB: -17
DurTicksMSB: 1

Note Block 3:
PlayNote: 16
startTicksLSB: -105
startTicksMSB: 7
Note #: 65
Vel On: 113
Vel Off: 92
DurTicksLSB: -46
DurTicksMSB: 3

Note Block 4:
PlayNote: 16
startTicksLSB: -122
startTicksMSB: 7
Note #: 67
Vel On: 121
Vel Off: 80
DurTicksLSB: -31
DurTicksMSB: 3

Note Block 5:
PlayNote: 16
startTicksLSB: 108
startTicksMSB: 7
Note #: 69
Vel On: 118
Vel Off: 58
DurTicksLSB: -91
DurTicksMSB: 1

Note Block 6:
PlayNote: 16
startTicksLSB: -100
startTicksMSB: 7
Note #: 70
Vel On: 127
Vel Off: 91
DurTicksLSB: -20
DurTicksMSB: 1

Note Block 7:
PlayNote: 16
startTicksLSB: 113
startTicksMSB: 7
Note #: 72
Vel On: 87
Vel Off: 55
DurTicksLSB: 116
DurTicksMSB: 1

If you take this series of values, pack each one as a signed byte, and encode the result as Base64, you get the corresponding eventData string from the .song file:

"EAcAPnha9QMQWQdAf1zvAxCXB0FxXNIFEIYHQ3lQ4QUQbAdFdjqlAxCcB0Z/W+wBEHEHSFc3dAE="

This appears in the .song XML as follows:

<LoopData writeIndex="56" recordStartTime="0" recordStopTime="11882" lastEventTime="4809"
          nBars="7" eventData="EAcAPnha9QMQWQdAf1zvAxCXB0FxXNIFEIYHQ3lQ4QUQbAdFdjqlAxCcB0Z/W+wBEHEHSFc3dAE="
          eventDataCrc="1ff6d4c4"/>
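
To check the layout, here's a minimal Python sketch that rebuilds the eventData string from the field values above, packing the negative numbers as signed (two's-complement) bytes:

import base64
import struct

# Field values from the seven note blocks above; negatives are
# stored as signed (two's-complement) bytes.
note_blocks = [
    (16,    7, 0, 62, 120, 90, -11, 1),
    (16,   89, 7, 64, 127, 92, -17, 1),
    (16, -105, 7, 65, 113, 92, -46, 3),
    (16, -122, 7, 67, 121, 80, -31, 3),
    (16,  108, 7, 69, 118, 58, -91, 1),
    (16, -100, 7, 70, 127, 91, -20, 1),
    (16,  113, 7, 72,  87, 55, 116, 1),
]

# Pack each eight-field block as signed bytes, then Base64 the buffer.
raw = b''.join(struct.pack('8b', *block) for block in note_blocks)
print(base64.b64encode(raw).decode())
# EAcAPnha9QMQWQdAf1zvAxCXB0FxXNIFEIYHQ3lQ4QUQbAdFdjqlAxCcB0Z/W+wBEHEHSFc3dAE=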

The problem we found is that the timing data is relative: when each note plays depends on the note before it. That makes real-time quantisation a bit of a nightmare. It might be possible to implement "offline" quantisation, processing a .song file to quantise the data, or to create new sequences based on MIDI data, but it's a hassle and we pretty much abandoned the investigation at that point.
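
For what it's worth, the offline approach would look something like the sketch below: rebuild absolute start times from the relative deltas, snap them to a grid, then convert back to deltas. It assumes each startTicks pair combines as a little-endian 16-bit value (LSB + 256 * MSB), which at least fits the DPC data: the seven deltas plus the last note's duration sum to exactly the recordStopTime of 11882. The 480-tick grid is a guess, not a confirmed Orba resolution.

def quantise_deltas(deltas, grid):
    """Snap relative start-tick deltas to a grid of `grid` ticks."""
    # Relative -> absolute start times
    absolute, t = [], 0
    for d in deltas:
        t += d
        absolute.append(t)
    # Snap each absolute time to the nearest grid line
    snapped = [round(t / grid) * grid for t in absolute]
    # Absolute -> relative again
    return [b - a for a, b in zip([0] + snapped, snapped)]

# DPC start deltas, read as LSB + 256 * MSB from the note blocks above
deltas = [7, 1881, 1943, 1926, 1900, 1948, 1905]
print(quantise_deltas(deltas, 480))
# [0, 1920, 1920, 1920, 1920, 1920, 1920]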
 
A few months later, ChatGPT arrived on the scene...

Attachments: song (31.2 KB), txt (1.28 KB), xlsx


Cheers; it's mainly a programming tutorial for me, trying to pick up a bit of Python. Looking forward to checking out the stuff you wrote for the Orba 2.

I decided to play with ChatGPT using the examples you've provided, and coached it into producing this:


 

import base64
import struct

base64_string = 'EAcAPnha9QMQWQdAf1zvAxCXB0FxXNIFEIYHQ3lQ4QUQbAdFdjqlAxCcB0Z/W+wBEHEHSFc3dAE='

# Decode the Base64 string
decoded_bytes = base64.b64decode(base64_string)

# Convert the decoded bytes to an array of unsigned integers
unsigned_int_array = struct.unpack('B' * len(decoded_bytes), decoded_bytes)

# Group the values into eight-byte note blocks
grouped_array = []
temp_group = []
for num in unsigned_int_array:
    temp_group.append(num)
    if len(temp_group) == 8:  # each note block is eight bytes long
        grouped_array.append(temp_group)
        temp_group = []

print(grouped_array)

 

The output is as expected:

 

[[16, 7, 0, 62, 120, 90, 245, 1], [16, 89, 7, 64, 127, 92, 239, 1], [16, 151, 7, 65, 113, 92, 210, 3], [16, 134, 7, 67, 121, 80, 225, 3], [16, 108, 7, 69, 118, 58, 165, 1], [16, 156, 7, 70, 127, 91, 236, 1], [16, 113, 7, 72, 87, 55, 116, 1]]

 

This generated code is nice and clean and avoids conditional negative numbers. It's best to read these values as unsigned ints, since they should all be positive. I had negative numbers from way back because they were easier to understand when transposing a note up (+) or down (-); for songs, we'll never need that. This code is also a good starting point for picking out other data structures from Base64 strings, like CC values and Pitch Bend data. Most values fit in the range 0-127, but Pitch Bend has a bigger range, which is why it needs two bytes.
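
As an illustration, a decoder for that kind of two-byte field might look like the following. It uses the standard MIDI convention of a 14-bit pitch bend value split across two 7-bit bytes, centred at 8192; whether the Orba packs it exactly this way is an assumption, not something confirmed here.

def decode_pitch_bend(lsb, msb):
    # Combine two 7-bit bytes into a 14-bit value (0-16383),
    # as in standard MIDI. 8192 means "no bend".
    value = ((msb & 0x7F) << 7) | (lsb & 0x7F)
    return value - 8192  # signed offset from centre

print(decode_pitch_bend(0, 64))  # 0 (no bend)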



Thanks for that. Yes, that's a better way to represent the data.

I found I was hitting the Orba 1 note limit with some of the MIDI files I was converting. Someone on the FB group asked if the Orba 2 would provide more capacity for this, and I was curious to see whether it would, and whether sequence data was represented in the same way; that's one of the reasons I decided to pick one up. Another was to see if ChatGPT might be able to progress the efforts to create a decent system for mapping samples.

I also wanted to see if the synth engine is identical. I'm not sure it's even based on the same processor, but I presume so. And no one ever made an editor for drum sounds, so I was curious to look into that as well.

Here's the latest version of this utility. I was able to download a MIDI file of Gershwin's 3rd Prelude from here:

https://alevy.com/gershwin.htm

...then run "py convert.py prelude3.mid".

This generates a loopData XML entry which can be swapped into a song (Ohm in this case) and plays the track. ("ger3", tempo 50.) 
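
The convert.py internals aren't shown here, but the first stage of any MIDI-to-loopData converter is pulling absolute note times out of the MIDI file before re-encoding them as relative Orba note blocks. A rough sketch of that stage using the mido library (the real script may do it differently):

import mido

def extract_notes(path):
    """Return (start_ticks, note, velocity, duration_ticks) tuples."""
    mid = mido.MidiFile(path)
    notes, open_notes, t = [], {}, 0
    for msg in mido.merge_tracks(mid.tracks):
        t += msg.time  # delta ticks -> running absolute time
        if msg.type == 'note_on' and msg.velocity > 0:
            open_notes[msg.note] = (t, msg.velocity)
        elif msg.type in ('note_off', 'note_on') and msg.note in open_notes:
            start, vel = open_notes.pop(msg.note)
            notes.append((start, msg.note, vel, t - start))
    return notes

print(extract_notes('prelude3.mid')[:5])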

Attachment: zip (21.1 KB)

Just unboxed a new Orba 2. They have their problems, but I'm still pleased with it. :-)


I copied the loopData from Scotland The Brave into an .artisong and it was recognisable, so that's a promising start.
