Compare commits


6 Commits

15 changed files with 643 additions and 335 deletions


@@ -10,6 +10,10 @@ as of now no plugin has ui, so they all just have a few sliders with whatever lo
## building a plugin
because steinberg is an incredibly nice company, i can't actually distribute vst2 plugin binaries without a license, which they don't give out anymore :)
so you're gonna have to build the plugins yourself, but don't worry cause it's relatively easy! (i think)
### macos
to build a plugin, first run:
@@ -32,32 +36,32 @@ alternatively, you can run:
./macos.sh packagename
```
this will do all three steps directly for you
### other platforms
no idea what you have to do for other platforms, refer to the `vst-rs` [docs](https://github.com/rustaudio/vst-rs). i think you can just build with `cargo b --package package_name --release` and take the `.so` or `.dll`, but don't quote me on that
PRs are welcome for more build instructions
i also might delete the `build` folder from the repo
## plugin list
the following is the current list of plugins
- `basic_gain`: simple gain plugin
- `noted`: output midi at regular intervals
- `sosten`: granular sustain plugin
- `quinoa`: [WIP] granular synthesis plugin
- `XLowpass`: reimplementation of [Airwindows XLowpass](http://www.airwindows.com/xlowpass/)
- `multidim`: randomized midi note generator
- `robotuna`: automatic pitch correction
- `hysteria`: hysteresis nonlinear effect
- `threebandeq`: 3 band eq
- `threebandwidth`: 3 band stereo widener
- `tritu`: say-goodbye-to-your-audio distortion
- `threebandfolding`: 3 band wave folding distortion
- `double_reverse_delta_inverter`: idk, a weird distortion
- `transmute_pitch`: pitch to midi converter
### basic_gain
@@ -93,7 +97,7 @@ since this needs a ui so we can select a file to play, and i still don't know ho
### XLowpass
XLowpass reimplements [Airwindows XLowpass](http://www.airwindows.com/xlowpass/) in rust. it's a lowpass distortion filter
parameters:
- `gain`
@@ -187,9 +191,31 @@ parameters:
bands work as they do in `threebandeq`. the `folding freq` parameters control the frequency of the wavefolding. higher is more distortion. use values over 40 if you want to hear white noise
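in case "wavefolding" is new to you: instead of clipping a sample that goes out of range, a folder reflects it back into range, which creates a lot of harmonics. a minimal sketch of the idea (this `fold` helper is hypothetical, not the plugin's actual code):

```rust
/// Reflect a sample back whenever it leaves [-1.0, 1.0].
/// `gain` plays a role similar to the `folding freq` params:
/// more gain, more folds, more distortion.
fn fold(x: f32, gain: f32) -> f32 {
    let mut y = x * gain;
    // keep mirroring around ±1 until the sample is back in range
    while y.abs() > 1.0 {
        y = y.signum() * 2.0 - y;
    }
    y
}

fn main() {
    // 0.75 * 2.0 = 1.5 overshoots, and gets mirrored back down to 0.5
    println!("{}", fold(0.75, 2.0));
}
```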
### double-reverse delta inverter
weird kinda distortion that makes loud things quiet and makes quiet things loud
parameters:
- `dry/wet`: dry/wet control
yeah there's not many params in this one, the implementation is pretty straightforward. it does weird things: a square wave of amplitude 1 will just be completely eliminated, while sine waves are distorted
the effect is reversible (it's an [involution](https://en.wikipedia.org/wiki/Involution_(mathematics))), so if you add this plugin twice in a row you get the original signal back. this means you can apply drdi to your audio, apply some other effect, and apply drdi again to get some fun stuff
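the whole effect boils down to mirroring each sample around ±1. a tiny runnable sketch using the same `inv` as the plugin source, showing the loud-quiet swap and the involution property:

```rust
// same mapping as the drdi source: mirror a sample around ±1
// https://www.desmos.com/calculator/dbjrrevmp0
fn inv(x: f32) -> f32 {
    (1.0 - x.abs()) * x.signum()
}

fn main() {
    let x = 0.9_f32;
    // loud in, quiet out: 0.9 maps to ~0.1
    println!("{}", inv(x));
    // applying it twice gives the input back (involution): ~0.9
    println!("{}", inv(inv(x)));
    // a full-scale sample (square wave at amplitude 1) maps to 0
    println!("{}", inv(1.0));
}
```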
### transmute midi
analyzes pitch of incoming audio and sends out the corresponding midi note
parameters:
- `passthrough`: how much of the original audio is let through
latency should be close to 1024 samples, which is around 21ms at 48kHz. the pitch detection is not excellent and it's monophonic, so if you have a noisy input or a bunch of notes at the same time it's anyone's guess what the midi output will be. there's also no pitch bending or anything fancy like that, so it'll jump around notes even if the input has portamento.
aside from that, it works well enough on my quick tests
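the pitch-to-note conversion is presumably the standard equal-temperament mapping (the actual `pitch_to_midi_note` helper lives in `utils` and isn't shown here, so this is an assumption):

```rust
// standard equal-temperament frequency -> midi note mapping
// (assumed equivalent to utils' pitch_to_midi_note, which isn't shown)
fn pitch_to_midi_note(freq: f32) -> u8 {
    // midi note 69 is A4 = 440 Hz, 12 notes per octave
    (69.0 + 12.0 * (freq / 440.0).log2()).round() as u8
}

fn main() {
    println!("{}", pitch_to_midi_note(440.0)); // 69, A4
    println!("{}", pitch_to_midi_note(261.63)); // 60, middle C
}
```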
## contributing
issues and prs are welcome, but please open an issue before making any big pr. i don't wanna have to reject a pr you've put a lot of effort into. if you're fine with that, ig go ahead, i'm not your mum
## license


@@ -0,0 +1,11 @@
[package]
name = "double_reverse_delta_inverter"
version = "0.1.0"
edition = "2018"
[lib]
crate-type = ["cdylib"]
[dependencies]
baseplug = { git = "https://github.com/wrl/baseplug.git", rev = "9cec68f31cca9c0c7a1448379f75d92bbbc782a8" }
serde = "1.0.126"


@@ -0,0 +1,61 @@
#![allow(incomplete_features)]
#![feature(generic_associated_types)]
use baseplug::{Plugin, ProcessContext};
use serde::{Deserialize, Serialize};
baseplug::model! {
#[derive(Debug, Serialize, Deserialize)]
struct InverterModel {
#[model(min = 0.0, max = 1.0)]
#[parameter(name = "dry/wet")]
dry_wet: f32,
}
}
impl Default for InverterModel {
fn default() -> Self {
Self { dry_wet: 1.0 }
}
}
struct Inverter;
impl Plugin for Inverter {
const NAME: &'static str = "double-reverse delta inverter";
const PRODUCT: &'static str = "double-reverse delta inverter";
const VENDOR: &'static str = "unnieversal";
const INPUT_CHANNELS: usize = 2;
const OUTPUT_CHANNELS: usize = 2;
type Model = InverterModel;
#[inline]
fn new(_sample_rate: f32, _model: &InverterModel) -> Self {
Self
}
#[inline]
fn process(&mut self, model: &InverterModelProcess, ctx: &mut ProcessContext<Self>) {
let input = &ctx.inputs[0].buffers;
let output = &mut ctx.outputs[0].buffers;
for i in 0..ctx.nframes {
let dw = model.dry_wet[i];
output[0][i] = inv(input[0][i]) * dw + (1.0 - dw) * input[0][i];
output[1][i] = inv(input[1][i]) * dw + (1.0 - dw) * input[1][i];
}
}
}
// https://www.desmos.com/calculator/dbjrrevmp0
// this is what the function does
// it's like a mirroring of sorts
fn inv(x: f32) -> f32 {
(1.0 - x.abs()) * x.signum()
}
baseplug::vst2!(Inverter, b"drdi");


@@ -1,19 +1,15 @@
#![allow(incomplete_features)]
#![feature(generic_associated_types)]
use std::time::Duration;
use baseplug::{MidiReceiver, Plugin, ProcessContext};
use ringbuf::{Consumer, Producer, RingBuffer};
use serde::{Deserialize, Serialize};
use utils::buffers::*;
use utils::delay::*;
use utils::logs::*;
use utils::pitch::*;
mod tuna;
const OVERLAP: usize = BUFFER_LEN / 3;
const BUFFER_LEN: usize = 2 << 9;
const DELAY_LEN: usize = 4000;
baseplug::model! {
#[derive(Debug, Serialize, Deserialize)]
@@ -21,7 +17,7 @@ baseplug::model! {
#[model(min = 0.0, max = 1.0)]
#[parameter(name = "manual/snap")]
manual: f32,
#[model(min = 0.1, max = 2.1)]
#[parameter(name = "frequency gain")]
freq_gain: f32,
}
@@ -40,32 +36,22 @@ struct RoboTuna {
/// Current midi note
note: Option<u8>,
/// Current recording buffer
/// Input goes here
recording_buffer: Buffers<BUFFER_LEN>,
/// The next recording buffer we'll use. It gets a bit of the end of the `recording_buffer`
/// so we can do overlap
next_recording_buffer: Buffers<BUFFER_LEN>,
/// Current playing buffer
/// Output comes from here
playing_buffer: Buffers<BUFFER_LEN>,
/// Next playing buffer we'll use
/// We start using it at the end of the previous buffer so we can overlap
next_playing_buffer: Buffers<BUFFER_LEN>,
/// Current pitches
pitch_l: Option<f32>,
pitch_r: Option<f32>,
/// Ringbuf producer so we can send audio chunks to the processing thread
recordings: Producer<tuna::ProcessChunk>,
/// Ringbuf consumer so we can receive processed buffers from the processing threads
processed: Consumer<Buffers<BUFFER_LEN>>,
detector_thread: pitch_detection::PitchDetectorThread<BUFFER_LEN>,
/// Contains some empty buffers so we can reuse them instead of doing allocations
/// Buffers here are not actually empty, since we don't spend any time clearing them
/// But since they will be overwritten, this isn't an issue
empty_buffers: Vec<Buffers<BUFFER_LEN>>,
/// Keeps delay lines for playing
delays: DelayLines<DELAY_LEN>,
/// Floating indexes so we can do interpolation
delay_idx_l: f32,
delay_idx_r: f32,
/// true indexes so we can know how much we're drifting away
true_idx: usize,
}
// LMAO let's go, i think this works
impl Plugin for RoboTuna {
const NAME: &'static str = "robotuna";
const PRODUCT: &'static str = "robotuna";
@@ -80,30 +66,23 @@ impl Plugin for RoboTuna {
fn new(_sample_rate: f32, _model: &RoboTunaModel) -> Self {
setup_logging("robotuna.log");
let (recordings, recording_rx) = RingBuffer::<tuna::ProcessChunk>::new(10).split();
let (processed_tx, processed) = RingBuffer::<Buffers<BUFFER_LEN>>::new(10).split();
// Spawn analysis thread
std::thread::spawn(move || {
tuna::tuna(recording_rx, processed_tx);
});
// keep some empty buffer around so we can swap them
let mut empty_buffers = Vec::with_capacity(50);
const BUF: Buffers<BUFFER_LEN> = Buffers::new();
empty_buffers.append(&mut vec![BUF; 30]);
let detector_thread = pitch_detection::PitchDetectorThread::<BUFFER_LEN>::new();
log::info!("finished init");
Self {
note: None,
recording_buffer: Buffers::new(),
next_recording_buffer: Buffers::new(),
playing_buffer: Buffers::new(),
next_playing_buffer: Buffers::new(),
recordings,
processed,
empty_buffers,
pitch_l: None,
pitch_r: None,
detector_thread,
delays: DelayLines::<DELAY_LEN>::new(),
delay_idx_l: 0.0,
delay_idx_r: 0.0,
// We start this at a high number cause idk
// We'll catch up when we start playing
true_idx: 500,
}
}
@@ -113,92 +92,142 @@ impl Plugin for RoboTuna {
let output = &mut ctx.outputs[0].buffers;
for i in 0..ctx.nframes {
// Record
// pass input to pitch detector
self.detector_thread
.write(input[0][i], input[1][i], ctx.sample_rate as u32);
// Add to main buffer
let full = self
.recording_buffer
.write_advance(input[0][i], input[1][i]);
// If we're in overlap section, also add to next buffer
if self.recording_buffer.idx > BUFFER_LEN - OVERLAP {
self.next_recording_buffer
.write_advance(input[0][i], input[1][i]);
}
// If we finish the buffer, switch them
if full {
// get the empty buffer from unused buffer list
let mut buf = self
.empty_buffers
.pop()
.expect("should have an empty buffer");
buf.reset();
std::mem::swap(&mut buf, &mut self.recording_buffer);
buf.reset();
std::mem::swap(&mut self.next_recording_buffer, &mut self.recording_buffer);
let _ = self.recordings.push(tuna::ProcessChunk {
buffers: buf,
sample_rate: ctx.sample_rate as u32,
note: self.note,
manual: model.manual[i] <= 0.5,
freq_gain: model.freq_gain[i],
});
// Try to get a processed buffer from the processor thread
if let Some((pitch_l, pitch_r)) = self.detector_thread.try_get_pitch() {
// Update current pitch
// We use `or`, so we keep the old value if the current one is None
self.pitch_l = pitch_l.or(self.pitch_l);
self.pitch_r = pitch_r.or(self.pitch_r);
}
// Play
// Get values from main buffer
let (mut l, mut r, full) = self.playing_buffer.read_advance();
// If we're in overlap section, also play from next buffer
if self.playing_buffer.idx > BUFFER_LEN - OVERLAP {
let (l1, r1, _) = self.next_playing_buffer.read_advance();
// How much into overlap we are, from 0 to 1
let overlap =
(OVERLAP - (BUFFER_LEN - self.playing_buffer.idx)) as f32 / OVERLAP as f32;
// Linearly crossfade
// linear crossfade works well for two waves that are highly correlated
l *= 1. - overlap;
r *= 1. - overlap;
l += overlap * l1;
r += overlap * r1;
}
// If we finish the buffer, switch them
if full {
// We try to switch like with the recording buffer, but since there might not be a processed
// buffer yet, we do it in a loop and retry every 1 millisecond
// After 10 iterations we give up. Since we didn't swap the buffer, we're gonna play the last one
// again. This isn't ideal, but it's better than silence ig (it might not be, idk)
// The 10 iterations is arbitrary, as is the 1 millisecond wait time
for _ in 0..10 {
if let Some(mut buf) = self.processed.pop() {
buf.reset();
std::mem::swap(&mut buf, &mut self.playing_buffer);
buf.reset();
std::mem::swap(&mut self.next_playing_buffer, &mut self.playing_buffer);
// Stick buf in unused buffer list
self.empty_buffers.push(buf);
// Exit loop
break;
} else {
log::info!("didn't have a processed buffer to swap to, retrying");
}
std::thread::sleep(Duration::from_millis(1));
}
}
// Play from delay line according to pitch
let (l, r) = self.shift(
input[0][i],
input[1][i],
ctx.sample_rate,
model.freq_gain[i],
model.manual[i] < 0.5,
);
output[0][i] = l;
output[1][i] = r;
}
}
}
impl RoboTuna {
fn advancement_rate(&self, freq_gain: f32, manual: bool) -> (f32, f32) {
// TODO Deal with pitch detection failing
let current_pitch_l = self.pitch_l.unwrap_or(220.0);
let current_pitch_r = self.pitch_r.unwrap_or(220.0);
if manual {
// If we're on manual, get the expected frequency from the midi note
if let Some(expected) = self.note.map(midi_note_to_pitch) {
let l = expected / current_pitch_l;
let r = expected / current_pitch_r;
(freq_gain * l, freq_gain * r)
} else {
// If there's no note, we just do frequency gain
(freq_gain, freq_gain)
}
} else {
// If we're on snap, get the closest note
let expected_l = closest_note_freq(current_pitch_l);
let expected_r = closest_note_freq(current_pitch_r);
let l = expected_l / current_pitch_l;
let r = expected_r / current_pitch_r;
(freq_gain * l, freq_gain * r)
}
}
fn shift(
&mut self,
l: f32,
r: f32,
sample_rate: f32,
freq_gain: f32,
manual: bool,
) -> (f32, f32) {
// so um this code will probably not make any sense if i don't write an explanation of the
// general thing it's trying to achieve
// if i've forgotten to write it up and you want to understand the code, ping me and uh yeah
// add input to delay line
self.delays.write_and_advance(l, r);
// get period of left & right
let period_l = sample_rate / self.pitch_l.unwrap_or(220.0);
let period_r = sample_rate / self.pitch_r.unwrap_or(220.0);
// advance indexes
let (adv_l, adv_r) = self.advancement_rate(freq_gain, manual);
self.delay_idx_l += adv_l;
self.delay_idx_r += adv_r;
self.true_idx += 1;
// get how close we are to the input idx, so we know if we have to interpolate/jump
let l_diff = self.true_idx as f32 - self.delay_idx_l;
let r_diff = self.true_idx as f32 - self.delay_idx_r;
// get the current value
let mut l = self.delays.l.floating_index(self.delay_idx_l);
let mut r = self.delays.r.floating_index(self.delay_idx_r);
// Interpolation
// if we are close to having to jump, we start interpolating with the jump destination
// interpolate when we're one third of the period away from jumping
// TODO change to a non-linear interpolation
const DIV: f32 = 2.0 / 3.0;
if l_diff - period_l < (period_l / DIV) {
let a = (l_diff - period_l) / (period_l / DIV);
l *= a;
l += (1.0 - a) * self.delays.l.floating_index(self.delay_idx_l - period_l);
}
if 3.0 * period_l - l_diff < (period_l / DIV) {
let a = (3.0 * period_l - l_diff) / (period_l / DIV);
l *= a;
l += (1.0 - a) * self.delays.l.floating_index(self.delay_idx_l - period_l);
}
if r_diff - period_r < (period_r / DIV) {
let a = (r_diff - period_r) / (period_r / DIV);
r *= a;
r += (1.0 - a) * self.delays.r.floating_index(self.delay_idx_r - period_r);
}
if 3.0 * period_r - r_diff < (period_r / DIV) {
let a = (3.0 * period_r - r_diff) / (period_r / DIV);
r *= a;
r += (1.0 - a) * self.delays.r.floating_index(self.delay_idx_r - period_r);
}
// Check if we need to advance/go back `period` samples
// we want to be between the second and third period
// so ideally we want {l,r}_diff == 2.0 * period_{l,r}
// We are about to get to the first period
if l_diff < period_l {
self.delay_idx_l -= period_l;
}
// We are about to get to the fourth period
if l_diff > 3.0 * period_l {
self.delay_idx_l += period_l;
}
if r_diff < period_r {
self.delay_idx_r -= period_r;
}
if r_diff > 3.0 * period_r {
self.delay_idx_r += period_r;
}
(l, r)
}
}
impl MidiReceiver for RoboTuna {
fn midi_input(&mut self, _model: &RoboTunaModelProcess, data: [u8; 3]) {
match data[0] {


@@ -1,131 +0,0 @@
use ringbuf::{Consumer, Producer};
use utils::buffers::*;
use utils::pitch::*;
use crate::BUFFER_LEN;
type SampleRate = u32;
pub struct ProcessChunk {
pub(crate) buffers: Buffers<BUFFER_LEN>,
pub(crate) sample_rate: SampleRate,
/// Midi note number to shift frequency to
pub(crate) note: Option<u8>,
/// If true, will listen to note
/// If false, will snap to closest note
pub(crate) manual: bool,
/// Extra frequency shifting to do
pub(crate) freq_gain: f32,
}
pub fn tuna(mut inputs: Consumer<ProcessChunk>, mut outputs: Producer<Buffers<BUFFER_LEN>>) {
// Keep track of last detected note, and use it in case of not detecting a new one
let mut prev_l_freq: Option<f32> = None;
let mut prev_r_freq: Option<f32> = None;
let mut detector_l = generate_pitch_detector(BUFFER_LEN);
let mut detector_r = generate_pitch_detector(BUFFER_LEN);
// Sample rates get overridden on first iteration, so we just set 48k
let mut shifter_l = generate_vocoder(48000);
let mut shifter_r = generate_vocoder(48000);
loop {
if let Some(ProcessChunk {
buffers: recording,
sample_rate,
note,
manual,
freq_gain,
}) = inputs.pop()
{
log::info!("got a buffer to process");
// If we're on manual mode, and we don't have a note, just pass through
if manual && note.is_none() {
let _ = outputs.push(recording);
continue;
}
// TODO It does weird stereo things
// Update sample rate
shifter_l.set_sample_rate(sample_rate as f64);
shifter_r.set_sample_rate(sample_rate as f64);
// Left
let l = recording.l;
// Try detecting note
let l = if let Some((actual, _clarity)) = pitch_detect(&mut detector_l, &l, sample_rate)
{
log::info!("L: detected actual pitch: {}", actual);
// If note is found, set it as previous, and pitch shift
prev_l_freq = Some(actual);
// If it's on manual mode, convert midi note to pitch
// If not, snap to closest frequency
let expected = if manual {
midi_note_to_pitch(note.expect("We wouldn't be here if note is None"))
} else {
closest_note_freq(actual)
};
// Perform pitch shift
// `expected / actual` is how much to shift the pitch
// If the actual pitch is 400, and expected is 800, we want to shift by 2
pitch_shift(&mut shifter_l, &l, freq_gain * expected / actual)
} else if let Some(actual) = prev_l_freq {
log::info!("L: reusing actual pitch: {}", actual);
let expected = if manual {
midi_note_to_pitch(note.expect("We wouldn't be here if note is None"))
} else {
closest_note_freq(actual)
};
pitch_shift(&mut shifter_l, &l, freq_gain * expected / actual)
} else {
log::info!("L: no actual pitch");
// If there's nothing, leave it as is
l
};
// Same thing for the right side
let r = recording.r;
let r = if let Some((actual, _clarity)) = pitch_detect(&mut detector_r, &r, sample_rate)
{
log::info!("R: detected actual pitch: {}", actual);
prev_r_freq = Some(actual);
let expected = if manual {
midi_note_to_pitch(note.expect("We wouldn't be here if note is None"))
} else {
closest_note_freq(actual)
};
pitch_shift(&mut shifter_r, &r, freq_gain * expected / actual)
} else if let Some(actual) = prev_r_freq {
log::info!("R: reusing actual pitch: {}", actual);
let expected = if manual {
midi_note_to_pitch(note.expect("We wouldn't be here if note is None"))
} else {
closest_note_freq(actual)
};
pitch_shift(&mut shifter_r, &r, freq_gain * expected / actual)
} else {
log::info!("R: no actual pitch");
r
};
let _ = outputs.push(Buffers::from(l, r));
log::info!("finished processing a buffer");
}
}
}


@@ -9,3 +9,5 @@ crate-type = ["cdylib"]
[dependencies]
baseplug = { git = "https://github.com/wrl/baseplug.git", rev = "9cec68f31cca9c0c7a1448379f75d92bbbc782a8" }
serde = "1.0.126"
utils = { path = "../utils" }


@@ -1,30 +0,0 @@
pub struct DelayLine<const LEN: usize> {
buffer: [f32; LEN],
index: usize,
}
impl<const LEN: usize> DelayLine<LEN> {
pub fn new() -> Self {
Self {
buffer: [0.0; LEN],
index: 0,
}
}
pub fn read_slice(&self, slice: &mut [f32]) {
// Copy values in order
for i in 0..LEN {
slice[i] = self.buffer[(self.index + i - LEN) % LEN];
}
}
pub fn write_and_advance(&mut self, value: f32) {
self.buffer[self.index] = value;
if self.index == LEN - 1 {
self.index = 0;
} else {
self.index += 1;
}
}
}


@@ -4,8 +4,7 @@
use baseplug::{Plugin, ProcessContext};
use serde::{Deserialize, Serialize};
mod delay;
use delay::DelayLine;
use utils::delay::*;
// If you change this remember to change the max on the model
const LEN: usize = 48000;


@@ -0,0 +1,15 @@
[package]
name = "transmute_pitch"
version = "0.1.0"
edition = "2018"
[lib]
crate-type = ["cdylib"]
[dependencies]
baseplug = { git = "https://github.com/wrl/baseplug.git", rev = "9cec68f31cca9c0c7a1448379f75d92bbbc782a8" }
serde = "1.0.126"
log = "0.4.14"
ringbuf = "0.2.5"
utils = { path = "../utils" }


@@ -0,0 +1,108 @@
#![allow(incomplete_features)]
#![feature(generic_associated_types)]
use baseplug::{event::Data, Event, Plugin, ProcessContext};
use serde::{Deserialize, Serialize};
use utils::pitch::*;
const BUFFER_LEN: usize = 2 << 9;
baseplug::model! {
#[derive(Debug, Serialize, Deserialize)]
struct TransmutePitchModel {
#[model(min = 0.0, max = 1.0)]
#[parameter(name = "passthrough")]
passthrough: f32,
}
}
impl Default for TransmutePitchModel {
fn default() -> Self {
Self { passthrough: 1.0 }
}
}
struct TransmutePitch {
detector_thread: pitch_detection::PitchDetectorThread<BUFFER_LEN>,
last_note: Option<u8>,
}
impl Plugin for TransmutePitch {
const NAME: &'static str = "transmute pitch";
const PRODUCT: &'static str = "transmute pitch";
const VENDOR: &'static str = "unnieversal";
const INPUT_CHANNELS: usize = 1;
const OUTPUT_CHANNELS: usize = 1;
type Model = TransmutePitchModel;
#[inline]
fn new(_sample_rate: f32, _model: &TransmutePitchModel) -> Self {
let detector_thread = pitch_detection::PitchDetectorThread::<BUFFER_LEN>::new();
Self {
detector_thread,
last_note: None,
}
}
#[inline]
fn process(&mut self, model: &TransmutePitchModelProcess, ctx: &mut ProcessContext<Self>) {
let input = &ctx.inputs[0].buffers;
let output = &mut ctx.outputs[0].buffers;
let enqueue_midi = &mut ctx.enqueue_event;
for i in 0..ctx.nframes {
output[0][i] = model.passthrough[i] * input[0][i];
output[1][i] = model.passthrough[i] * input[1][i];
// pass input to pitch detector
self.detector_thread
.write(input[0][i], 0.0, ctx.sample_rate as u32);
// Try to get a processed buffer from the processor thread
match self.detector_thread.try_get_pitch() {
Some((Some(pitch), _)) => {
let note = pitch_to_midi_note(pitch);
// If note changed
if self.last_note != Some(note) {
// Send note off for last note
if let Some(last_note) = self.last_note {
let note_off = Event::<TransmutePitch> {
frame: i,
data: Data::Midi([0x80, last_note, 0]),
};
enqueue_midi(note_off);
}
// Send note on for the new one
let note_on = Event::<TransmutePitch> {
frame: i,
data: Data::Midi([0x90, note, 64]),
};
enqueue_midi(note_on);
// Update note
self.last_note = Some(note);
}
}
Some((None, _)) => {
if let Some(last_note) = self.last_note {
let note_off = Event::<TransmutePitch> {
frame: i,
data: Data::Midi([0x80, last_note, 0]),
};
enqueue_midi(note_off);
}
self.last_note = None;
}
_ => {}
}
}
}
}
baseplug::vst2!(TransmutePitch, b"trpi");


@@ -10,3 +10,4 @@ log = "0.4.14"
log-panics = "2.0.0"
dirs = "3.0.2"
pvoc = { path = "../pvoc-rs" }
ringbuf = "0.2.5"

crates/utils/src/delay.rs

@@ -0,0 +1,88 @@
pub struct DelayLine<const LEN: usize> {
buffer: [f32; LEN],
index: usize,
}
impl<const LEN: usize> DelayLine<LEN> {
pub fn new() -> Self {
Self {
buffer: [0.0; LEN],
index: 0,
}
}
pub fn read_slice(&self, slice: &mut [f32]) {
// Copy values in order
for i in 0..LEN {
slice[i] = self.wrapped_index(self.index + i);
}
}
pub fn write_and_advance(&mut self, value: f32) {
self.buffer[self.index] = value;
if self.index == LEN - 1 {
self.index = 0;
} else {
self.index += 1;
}
}
/// Returns the sample at idx after taking modulo LEN
pub fn wrapped_index(&self, idx: usize) -> f32 {
self.buffer[idx % LEN]
}
/// Indexes the buffer but interpolates between the current and the next sample
pub fn floating_index(&self, val: f32) -> f32 {
let idx = val.trunc() as usize;
let frac = val.fract();
// TODO uhm idk what this should be, but we don't want an underflow so yeah,
let xm1 = if idx == 0 {
0.0
} else {
self.wrapped_index(idx - 1)
};
let x0 = self.wrapped_index(idx);
let x1 = self.wrapped_index(idx + 1);
let x2 = self.wrapped_index(idx + 2);
// linear interpolation
// return (1.0 - frac) * x0 + frac * x1;
crate::hermite(frac, xm1, x0, x1, x2)
}
/// Get a reference to the delay line's index.
pub fn idx(&self) -> &usize {
&self.index
}
/// Get a reference to the delay line's buffer.
pub fn buffer(&self) -> &[f32; LEN] {
&self.buffer
}
}
pub struct DelayLines<const LEN: usize> {
pub l: DelayLine<LEN>,
pub r: DelayLine<LEN>,
}
impl<const LEN: usize> DelayLines<LEN> {
pub fn new() -> Self {
Self {
l: DelayLine::<LEN>::new(),
r: DelayLine::<LEN>::new(),
}
}
pub fn read_slices(&self, l: &mut [f32], r: &mut [f32]) {
self.l.read_slice(l);
self.r.read_slice(r);
}
pub fn write_and_advance(&mut self, l: f32, r: f32) {
self.l.write_and_advance(l);
self.r.write_and_advance(r);
}
}


@@ -1,4 +1,15 @@
pub mod buffers;
pub mod delay;
pub mod logs;
pub mod pitch;
pub mod threeband;
pub fn hermite(frac: f32, xm1: f32, x0: f32, x1: f32, x2: f32) -> f32 {
let c = (x1 - xm1) * 0.5;
let v = x0 - x1;
let w = c + v;
let a = w + v + (x2 - x0) * 0.5;
let b_neg = w + a;
(((a * frac) - b_neg) * frac + c) * frac + x0
}
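for reference, `hermite` is a standard 4-point, third-order interpolator. a quick sanity check of its endpoint behavior, copying the same formula: at `frac = 0` it returns `x0`, and at `frac = 1` it returns `x1`.

```rust
// same formula as utils::hermite, reproduced here to check endpoints
fn hermite(frac: f32, xm1: f32, x0: f32, x1: f32, x2: f32) -> f32 {
    let c = (x1 - xm1) * 0.5;
    let v = x0 - x1;
    let w = c + v;
    let a = w + v + (x2 - x0) * 0.5;
    let b_neg = w + a;
    (((a * frac) - b_neg) * frac + c) * frac + x0
}

fn main() {
    // endpoints: frac = 0 gives x0, frac = 1 gives x1
    println!("{}", hermite(0.0, 0.0, 1.0, 2.0, 3.0)); // 1
    println!("{}", hermite(1.0, 0.0, 1.0, 2.0, 3.0)); // 2
    // on linear data the curve stays linear
    println!("{}", hermite(0.5, 0.0, 1.0, 2.0, 3.0)); // 1.5
}
```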


@@ -1,30 +1,4 @@
use pitch_detection::detector::yin::YINDetector;
use pitch_detection::detector::PitchDetector;
pub fn generate_pitch_detector(size: usize) -> impl PitchDetector<f32> {
let padding = size / 2;
YINDetector::new(size, padding)
}
/// Returns an option with (Frequency, Clarity)
pub fn pitch_detect(
detector: &mut dyn PitchDetector<f32>,
signal: &[f32],
sample_rate: u32,
) -> Option<(f32, f32)> {
const POWER_THRESHOLD: f32 = 0.15;
const CLARITY_THRESHOLD: f32 = 0.5;
let pitch = detector.get_pitch(
&signal,
sample_rate as usize,
POWER_THRESHOLD,
CLARITY_THRESHOLD,
);
pitch.map(|a| (a.frequency, a.clarity))
}
pub mod pitch_detection;
pub fn generate_vocoder(sample_rate: u32) -> PhaseVocoder {
PhaseVocoder::new(1, sample_rate as f64, 256, 4)


@ -0,0 +1,144 @@
use pitch_detection::detector::yin::YINDetector;
use pitch_detection::detector::PitchDetector;
use crate::buffers::Buffers;
use ringbuf::{Consumer, Producer, RingBuffer};
pub fn generate_pitch_detector(size: usize) -> impl PitchDetector<f32> {
let padding = size / 2;
YINDetector::new(size, padding)
}
/// Returns an option with (Frequency, Clarity)
pub fn pitch_detect(
detector: &mut dyn PitchDetector<f32>,
signal: &[f32],
sample_rate: u32,
) -> Option<(f32, f32)> {
const POWER_THRESHOLD: f32 = 0.15;
const CLARITY_THRESHOLD: f32 = 0.5;
let pitch = detector.get_pitch(
&signal,
sample_rate as usize,
POWER_THRESHOLD,
CLARITY_THRESHOLD,
);
pitch.map(|a| (a.frequency, a.clarity))
}
pub struct DetectionInput<const LEN: usize> {
pub buffers: Buffers<LEN>,
pub sample_rate: u32,
}
pub struct DetectionOutput<const LEN: usize> {
pub buffers: Buffers<LEN>,
pub pitch_l: Option<f32>,
pub pitch_r: Option<f32>,
}
pub fn detect<const LEN: usize>(
mut inputs: Consumer<DetectionInput<LEN>>,
mut outputs: Producer<DetectionOutput<LEN>>,
) {
let mut detector_l = generate_pitch_detector(LEN);
let mut detector_r = generate_pitch_detector(LEN);
loop {
if let Some(DetectionInput::<LEN> {
buffers,
sample_rate,
}) = inputs.pop()
{
let pitch_l = pitch_detect(&mut detector_l, &buffers.l, sample_rate).map(|a| a.0);
let pitch_r = pitch_detect(&mut detector_r, &buffers.r, sample_rate).map(|a| a.0);
let _ = outputs.push(DetectionOutput {
buffers,
pitch_l,
pitch_r,
});
}
}
}
pub struct PitchDetectorThread<const LEN: usize> {
/// Current recording buffer
/// Input goes here
recording_buffer: Buffers<LEN>,
/// Ringbuf producer so we can send audio chunks to the processing thread
recordings: Producer<DetectionInput<LEN>>,
/// Ringbuf consumer so we can receive processed buffers from the processing threads
processed: Consumer<DetectionOutput<LEN>>,
/// Contains some empty buffers so we can reuse them instead of doing allocations
/// Buffers here are not actually empty, since we don't spend any time clearing them
/// But since they will be overwritten, this isn't an issue
empty_buffers: Vec<Buffers<LEN>>,
}
impl<const LEN: usize> PitchDetectorThread<LEN> {
pub fn new() -> Self {
let (recordings, recording_rx) = RingBuffer::<DetectionInput<LEN>>::new(30).split();
let (processed_tx, processed) = RingBuffer::<DetectionOutput<LEN>>::new(30).split();
// Spawn analysis thread
std::thread::spawn(move || {
detect(recording_rx, processed_tx);
});
// keep some empty buffer around so we can swap them
let mut empty_buffers = Vec::with_capacity(80);
empty_buffers.append(&mut vec![Buffers::new(); 30]);
Self {
recordings,
processed,
empty_buffers,
recording_buffer: Buffers::new(),
}
}
pub fn write(&mut self, l: f32, r: f32, sample_rate: u32) {
let full = self.recording_buffer.write_advance(l, r);
// If we fill the buffer, switch it with an empty one
if full {
// we have to loop here, cause when the daw renders audio it tries to do it faster than
// real time. if we don't loop and wait, the processing thread gets stuck with all of the buffers,
// and we run out of empty ones to switch to
// the loop-wait ensures that we don't panic when there isn't an empty buffer
loop {
// get the empty buffer from unused buffer list
if let Some(mut buf) = self.empty_buffers.pop() {
buf.reset();
// swap it with recording buffer
std::mem::swap(&mut buf, &mut self.recording_buffer);
buf.reset();
// pass it to the processor thread
let _ = self.recordings.push(DetectionInput::<LEN> {
buffers: buf,
sample_rate,
});
break;
}
std::thread::sleep(std::time::Duration::from_micros(10));
}
}
}
pub fn try_get_pitch(&mut self) -> Option<(Option<f32>, Option<f32>)> {
let DetectionOutput::<LEN> {
buffers,
pitch_l,
pitch_r,
} = self.processed.pop()?;
self.empty_buffers.push(buffers);
Some((pitch_l, pitch_r))
}
}