Audacity 3.2.0
MIR Namespace Reference

Namespaces

namespace  anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}
 
namespace  anonymous_namespace{MirDsp.cpp}
 
namespace  anonymous_namespace{MirUtils.cpp}
 
namespace  anonymous_namespace{MusicInformationRetrieval.cpp}
 
namespace  anonymous_namespace{MusicInformationRetrievalTests.cpp}
 
namespace  anonymous_namespace{StftFrameProvider.cpp}
 
namespace  anonymous_namespace{StftFrameProviderTests.cpp}
 
namespace  anonymous_namespace{TatumQuantizationFitBenchmarking.cpp}
 

Classes

class  AnalyzedAudioClip
 
class  DecimatingMirAudioReader
 Our MIR operations do not need the full 44.1 or 48 kHz resolution typical of audio files. This may change in the future, if we start looking at chromagrams for example, but for now even a certain amount of aliasing isn't an issue. In fact, for onset detection it may even be beneficial, since it preserves a trace of the highest-frequency components by folding them down below the Nyquist frequency. We can therefore decimate the audio signal to a certain extent. This is fast and easy to implement, while dramatically reducing the amount of data and the number of operations.
 
class  EmptyMirAudioReader
 
class  FakeAnalyzedAudioClip
 
class  FakeProjectInterface
 
struct  LoopClassifierSettings
 
class  MirAudioReader
 
struct  MusicalMeter
 
struct  OctaveError
 
struct  OnsetQuantization
 
class  ProjectInterface
 
struct  ProjectSyncInfo
 
struct  ProjectSyncInfoInput
 
struct  QuantizationFitDebugOutput
 
struct  RocInfo
 
class  SquareWaveMirAudioReader
 
class  StftFrameProvider
 
class  WavMirAudioReader
 

Enumerations

enum class  FalsePositiveTolerance { Strict , Lenient }
 
enum class  TimeSignature {
  TwoTwo , FourFour , ThreeFour , SixEight ,
  _count
}
 
enum class  TempoObtainedFrom { Header , Title , Signal }
 How the tempo was obtained (see details below).
 

Functions

std::optional< MusicalMeter > GetMeterUsingTatumQuantizationFit (const MirAudioReader &audio, FalsePositiveTolerance tolerance, const std::function< void(double)> &progressCallback, QuantizationFitDebugOutput *debugOutput)
 Get the BPM of the given audio file, using the Tatum Quantization Fit method.
 
std::vector< float > GetNormalizedCircularAutocorr (const std::vector< float > &x)
 Get the normalized, circular auto-correlation for a signal x whose length is already a power of two. Since the output is symmetric, only the left-hand side is returned, i.e., of size N/2 + 1, where N is x's (power-of-two) size.
 
std::vector< float > GetOnsetDetectionFunction (const MirAudioReader &audio, const std::function< void(double)> &progressCallback, QuantizationFitDebugOutput *debugOutput)
 
int GetNumerator (TimeSignature ts)
 
int GetDenominator (TimeSignature ts)
 
std::vector< int > GetPossibleBarDivisors (int lower, int upper)
 Function to generate numbers whose prime factorization contains only twos or threes.
 
std::vector< int > GetPeakIndices (const std::vector< float > &x)
 
std::vector< float > GetNormalizedHann (int size)
 
constexpr auto IsPowOfTwo (int x)
 
std::optional< ProjectSyncInfo > GetProjectSyncInfo (const ProjectSyncInfoInput &in)
 
std::optional< double > GetBpmFromFilename (const std::string &filename)
 
std::optional< MusicalMeter > GetMusicalMeterFromSignal (const MirAudioReader &audio, FalsePositiveTolerance tolerance, const std::function< void(double)> &progressCallback, QuantizationFitDebugOutput *debugOutput)
 
void SynchronizeProject (const std::vector< std::shared_ptr< AnalyzedAudioClip > > &clips, ProjectInterface &project, bool projectWasEmpty)
 
void ProgressBar (int width, int percent)
 
OctaveError GetOctaveError (double expected, double actual)
 Gets the tempo detection octave error, as defined in Section 5 of Schreiber, H., Urbano, J. and Müller, M. (2020), "Music Tempo Estimation: Are We Done Yet?", Transactions of the International Society for Music Information Retrieval, 3(1), pp. 111–125. DOI: https://doi.org/10.5334/tismir.43. In short, with an example: two bars of a fast 3/4 can in some cases be interpreted as one bar of 6/8. However, there are 6 beats in the former against 2 in the latter, leading to an "octave error" of 3. In that case, the returned factor would be 3, and the remainder log2(3 * actual / expected).
 
template<typename Result >
RocInfo GetRocInfo (std::vector< Result > results, double allowedFalsePositiveRate=0.)
 
template<typename T >
void PrintPythonVector (std::ofstream &ofs, const std::vector< T > &v, const char *name)
 
template<int bufferSize = 1024>
float GetChecksum (const MirAudioReader &source)
 
 TEST_CASE ("GetBpmFromFilename")
 
 TEST_CASE ("GetProjectSyncInfo")
 
 TEST_CASE ("SynchronizeProject")
 
 TEST_CASE ("StftFrameProvider")
 
 TEST_CASE ("GetRocInfo")
 
 TEST_CASE ("GetChecksum")
 
auto ToString (const std::optional< TimeSignature > &ts)
 
 TEST_CASE ("TatumQuantizationFitBenchmarking")
 
 TEST_CASE ("TatumQuantizationFitVisualization")
 

Variables

static const std::unordered_map< FalsePositiveTolerance, LoopClassifierSettings > loopClassifierSettings
 
static constexpr auto runLocally = false
 

Detailed Description


These pages are generated from the following source files, all part of Audacity: A Digital Audio Editor and authored by Matthieu Hodgkinson: DecimatingMirAudioReader.cpp/.h, GetMeterUsingTatumQuantizationFit.cpp/.h, MirDsp.cpp/.h, MirProjectInterface.h, MirTypes.h, MirUtils.cpp/.h, MusicInformationRetrieval.cpp/.h, StftFrameProvider.cpp/.h, MirFakes.h, MirTestUtils.h, StftFrameProviderTests.cpp and WaveMirAudioReader.cpp/.h.

GetMeterUsingTatumQuantizationFit.h

A method to classify audio recordings into loops and non-loops, with a confidence score, together with a BPM estimate.

The method evaluates the assumption that the given audio is a loop. Based on this assumption, and on a finite set of possible tempi and time signatures, a set of hypotheses is tested. For each hypothesis, a tatum* quantization is tried, returning an average of the normalized distances between Onset Detection Function (ODF) peaks and the closest tatum, weighted by the ODF peak values. This yields a single scalar that correlates strongly with whether or not the audio is a loop, and which we use for loop/non-loop classification.

Besides this score, the classification stage also yields the most likely tatum rate, which still needs disambiguation to find the beat rate. The autocorrelation of the ODF is taken and, for each bar division that explains the tatum rate, comb-filtered. The comb-filtering energy is combined with the BPM likelihood, and the BPM with the largest score is returned.

This approach is in some aspects similar to existing tempo detection methods (e.g. Percival, Graham & Tzanetakis, George (2014), implemented in the Essentia framework at https://essentia.upf.edu/), insofar as it first derives an ODF and then correlates it with expected rhythmic patterns. However, the quantization distance at the core of the method is not known by the author to be used in other methods. Also, once the ODF is taken, the loop assumption lends itself to a single analysis of the entire ODF, rather than to mid-term analyses that are then combined. Finally, albeit restricting the scope of application, the loop assumption reduces the number of hypotheses that are tried, reducing the risk that non-musical recordings get detected as musical by sheer luck. This increased robustness against false positives is quintessential for Audacity, where non-music users should not be bothered by wrong detections. The loop assumption is nevertheless not fundamental, and the algorithm could be implemented without it, at the cost of a higher risk of false positives.

Evaluation and benchmarking code can be found in TatumQuantizationFitBenchmarking.cpp. This code takes a tolerable false-positive rate, and outputs the corresponding loop/non-loop threshold. It also returns the Octave Error accuracy measure, as introduced in "Schreiber, H., et al. (2020). Music Tempo Estimation: Are We Done Yet?".

(*) A tatum is the smallest rhythmic unit in a musical piece. Quoting from https://en.wikipedia.org/wiki/Tatum_(music): "The term was coined by Jeff Bilmes (...) and is named after the influential jazz pianist Art Tatum, 'whose tatum was faster than all others'".
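For orientation, a minimal usage sketch of the signal-analysis entry point follows. The header names and the WAV file path are assumptions made for the example, not something this documentation specifies:

#include "MusicInformationRetrieval.h" // assumed include paths
#include "MirTypes.h"
#include "WaveMirAudioReader.h"
#include <iostream>

int main()
{
   // Hypothetical candidate loop on disk.
   const MIR::WavMirAudioReader audio { "drum-loop-120bpm.wav" };

   // Lenient tolerance corresponds to the Beats-and-Measures view in Audacity.
   const auto meter = MIR::GetMusicalMeterFromSignal(
      audio, MIR::FalsePositiveTolerance::Lenient,
      [](double progress) { std::cout << "progress: " << progress << "\n"; },
      nullptr /* no debug output */);

   if (meter.has_value())
      std::cout << "Detected BPM: " << meter->bpm << "\n";
   else
      std::cout << "Probably not a loop\n";
}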


MirDsp.h

DSP utilities used by the Music Information Retrieval code. These may migrate to lib-math if needed elsewhere.

Enumeration Type Documentation

◆ FalsePositiveTolerance

enum class MIR::FalsePositiveTolerance
strong
Enumerator
Strict 
Lenient 

Definition at line 24 of file MirTypes.h.

◆ TempoObtainedFrom

enum class MIR::TempoObtainedFrom
strong

How the tempo was obtained:

  • looking for RIFF and ACID metadata in a WAV file's header,
  • looking for a tempo in the title of the file,
  • analyzing the signal.
Enumerator
Header 
Title 
Signal 

Definition at line 59 of file MirTypes.h.

60{
61 Header,
62 Title,
63 Signal,
64};

◆ TimeSignature

enum class MIR::TimeSignature
strong
Enumerator
TwoTwo 
FourFour 
ThreeFour 
SixEight 
_count 

Definition at line 30 of file MirTypes.h.

Function Documentation

◆ GetBpmFromFilename()

MUSIC_INFORMATION_RETRIEVAL_API std::optional< double > MIR::GetBpmFromFilename ( const std::string &  filename)

Definition at line 107 of file MusicInformationRetrieval.cpp.

108{
109 // regex matching a forward or backward slash:
110
111 // Regex: <(anything + (directory) separator) or nothing> <2 or 3 digits>
112 // <optional separator> <bpm (case-insensitive)> <separator or nothing>
113 const std::regex bpmRegex {
114 R"((?:.*(?:_|-|\s|\.|/|\\))?(\d+)(?:_|-|\s|\.)?bpm(?:(?:_|-|\s|\.).*)?)",
115 std::regex::icase
116 };
117 std::smatch matches;
118 if (std::regex_match(filename, matches, bpmRegex))
119 try
120 {
121 const auto value = std::stoi(matches[1]);
122 return 30 <= value && value <= 300 ? std::optional<double> { value } :
123 std::nullopt;
124 }
125 catch (const std::invalid_argument& e)
126 {
127 assert(false);
128 }
129 return {};
130}
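A small usage sketch (the include path is assumed; the filenames mirror the test cases further down):

#include "MusicInformationRetrieval.h" // assumed include path
#include <cassert>
#include <iostream>

int main()
{
   // A value between 30 and 300 next to "bpm" (case-insensitive) is picked up:
   const auto bpm =
      MIR::GetBpmFromFilename("Cymatics - Cyclone Top Drum Loop 3 - 174 BPM.wav");
   if (bpm.has_value())
      std::cout << *bpm << "\n"; // 174

   // Filenames without such a pattern yield std::nullopt:
   assert(!MIR::GetBpmFromFilename("Fantasie Impromptu Op. 66.mp3").has_value());
}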

Referenced by GetProjectSyncInfo(), and TEST_CASE().

◆ GetChecksum()

template<int bufferSize = 1024>
float MIR::GetChecksum ( const MirAudioReader &  source)

Definition at line 163 of file MirTestUtils.h.

164{
165 // Sum samples to checksum.
166 float checksum = 0.f;
167 long long start = 0;
168 std::array<float, bufferSize> buffer;
169 while (true)
170 {
171 const auto numSamples =
172 std::min<long long>(bufferSize, source.GetNumSamples() - start);
173 if (numSamples == 0)
174 break;
175 source.ReadFloats(buffer.data(), start, numSamples);
176 checksum +=
177 std::accumulate(buffer.begin(), buffer.begin() + numSamples, 0.f);
178 start += numSamples;
179 }
180 return checksum;
181}
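Usage sketch, mirroring TEST_CASE("GetChecksum") below (the include paths are assumptions):

#include "MirFakes.h"     // assumed: provides SquareWaveMirAudioReader
#include "MirTestUtils.h" // assumed: provides GetChecksum
#include <iostream>

int main()
{
   // The samples of a symmetric square wave cancel out, so the checksum is 0,
   // regardless of the buffer size chosen as template argument.
   const auto checksum = MIR::GetChecksum<5>(MIR::SquareWaveMirAudioReader {});
   std::cout << checksum << "\n"; // 0
}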

References MIR::MirAudioReader::GetNumSamples(), and MIR::MirAudioReader::ReadFloats().

Referenced by TEST_CASE().

◆ GetDenominator()

int MIR::GetDenominator ( TimeSignature  ts)
inline

Definition at line 46 of file MirTypes.h.

47{
48 constexpr std::array<int, static_cast<int>(TimeSignature::_count)>
49 denominators = { 2, 4, 4, 8 };
50 return denominators[static_cast<int>(ts)];
51}

References _count.

Referenced by AudacityMirProject::ReconfigureMusicGrid().

◆ GetMeterUsingTatumQuantizationFit()

std::optional< MusicalMeter > MIR::GetMeterUsingTatumQuantizationFit ( const MirAudioReader &  audio,
FalsePositiveTolerance  tolerance,
const std::function< void(double)> &  progressCallback,
QuantizationFitDebugOutput *  debugOutput 
)

Get the BPM of the given audio file, using the Tatum Quantization Fit method.

Definition at line 392 of file GetMeterUsingTatumQuantizationFit.cpp.

396{
397 const auto odf =
398 GetOnsetDetectionFunction(audio, progressCallback, debugOutput);
399 const auto odfSr =
400 1. * audio.GetSampleRate() * odf.size() / audio.GetNumSamples();
401 const auto audioFileDuration =
402 1. * audio.GetNumSamples() / audio.GetSampleRate();
403
404 const auto peakIndices = GetPeakIndices(odf);
405 if (debugOutput)
406 {
407 debugOutput->audioFileDuration = audioFileDuration;
408 debugOutput->odfSr = odfSr;
409 debugOutput->odfPeakIndices = peakIndices;
410 }
411
412 const auto peakValues = ([&]() {
413 std::vector<float> peakValues(peakIndices.size());
414 std::transform(
415 peakIndices.begin(), peakIndices.end(), peakValues.begin(),
416 [&](int i) { return odf[i]; });
417 return peakValues;
418 })();
419
420 if (IsSingleEvent(peakIndices, peakValues))
421 return {};
422
423 const auto possibleDivs = GetPossibleDivHierarchies(audioFileDuration);
424 if (possibleDivs.empty())
425 // The file is probably too short to be a loop.
426 return {};
427
428 const auto possibleNumTatums = [&]() {
429 std::vector<int> possibleNumTatums(possibleDivs.size());
430 std::transform(
431 possibleDivs.begin(), possibleDivs.end(), possibleNumTatums.begin(),
432 [&](const auto& entry) { return entry.first; });
433 return possibleNumTatums;
434 }();
435
436 const auto experiment = RunQuantizationExperiment(
437 odf, peakIndices, peakValues, possibleNumTatums, debugOutput);
438
439 const auto winnerMeter = GetMostLikelyMeterFromQuantizationExperiment(
440 odf, experiment.numDivisions, possibleDivs.at(experiment.numDivisions),
441 audioFileDuration, debugOutput);
442
443 const auto score = 1 - experiment.error;
444
445 if (debugOutput)
446 {
447 debugOutput->tatumQuantization = experiment;
448 debugOutput->bpm = winnerMeter.bpm;
449 debugOutput->timeSignature = winnerMeter.timeSignature;
450 debugOutput->odf = odf;
451 debugOutput->odfSr = odfSr;
452 debugOutput->audioFileDuration = audioFileDuration;
453 debugOutput->score = score;
454 }
455
456 return score < loopClassifierSettings.at(tolerance).threshold ?
457 std::optional<MusicalMeter> {} :
458 winnerMeter;
459}

References audio, MIR::QuantizationFitDebugOutput::audioFileDuration, MIR::QuantizationFitDebugOutput::bpm, entry, MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::GetMostLikelyMeterFromQuantizationExperiment(), GetOnsetDetectionFunction(), GetPeakIndices(), MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::GetPossibleDivHierarchies(), MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::IsSingleEvent(), loopClassifierSettings, MIR::QuantizationFitDebugOutput::odf, MIR::QuantizationFitDebugOutput::odfPeakIndices, MIR::QuantizationFitDebugOutput::odfSr, MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::RunQuantizationExperiment(), MIR::QuantizationFitDebugOutput::score, MIR::QuantizationFitDebugOutput::tatumQuantization, and MIR::QuantizationFitDebugOutput::timeSignature.

Referenced by GetMusicalMeterFromSignal().

◆ GetMusicalMeterFromSignal()

MUSIC_INFORMATION_RETRIEVAL_API std::optional< MusicalMeter > MIR::GetMusicalMeterFromSignal ( const MirAudioReader &  audio,
FalsePositiveTolerance  tolerance,
const std::function< void(double)> &  progressCallback,
QuantizationFitDebugOutput *  debugOutput 
)

Definition at line 132 of file MusicInformationRetrieval.cpp.

136{
137 if (audio.GetSampleRate() <= 0)
138 return {};
139 const auto duration = 1. * audio.GetNumSamples() / audio.GetSampleRate();
140 if (duration > 60)
141 // A file longer than 1 minute is most likely not a loop, and processing
142 // it would be costly.
143 return {};
144 DecimatingMirAudioReader decimatedAudio { audio };
 145 return GetMeterUsingTatumQuantizationFit(
 146 decimatedAudio, tolerance, progressCallback, debugOutput);
147}

References audio, and GetMeterUsingTatumQuantizationFit().

Referenced by GetProjectSyncInfo(), and TEST_CASE().

◆ GetNormalizedCircularAutocorr()

std::vector< float > MIR::GetNormalizedCircularAutocorr ( const std::vector< float > &  x)

Get the normalized, circular auto-correlation for a signal x whose length is already a power of two. Since the output is symmetric, only the left-hand side is returned, i.e., of size N/2 + 1, where N is x's (power-of-two) size.

Precondition
x.size() is a power of two.
Postcondition
returned vector has size x.size() / 2 + 1.

Definition at line 73 of file MirDsp.cpp.

74{
75 if (std::all_of(ux.begin(), ux.end(), [](float x) { return x == 0.f; }))
76 return ux;
77 const auto N = ux.size();
78 assert(IsPowOfTwo(N));
79 PffftSetupHolder setup { pffft_new_setup(N, PFFFT_REAL) };
80 PffftFloatVector x { ux.begin(), ux.end() };
81 PffftFloatVector work(N);
82 pffft_transform_ordered(
83 setup.get(), x.data(), x.data(), work.data(), PFFFT_FORWARD);
84
85 // Transform to a power spectrum, but preserving the layout expected by PFFFT
86 // in preparation for the inverse transform.
87 x[0] *= x[0];
88 x[1] *= x[1];
89 for (auto n = 2; n < N; n += 2)
90 {
91 x[n] = x[n] * x[n] + x[n + 1] * x[n + 1];
92 x[n + 1] = 0.f;
93 }
94
95 pffft_transform_ordered(
96 setup.get(), x.data(), x.data(), work.data(), PFFFT_BACKWARD);
97
98 // The second half of the circular autocorrelation is the mirror of the first
99 // half. We are economic and only keep the first half.
100 x.erase(x.begin() + N / 2 + 1, x.end());
101
102 const auto normalizer = 1 / x[0];
103 std::transform(x.begin(), x.end(), x.begin(), [normalizer](float x) {
104 return x * normalizer;
105 });
106 return { x.begin(), x.end() };
107}
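A quick sanity-check sketch (include path assumed): a periodic impulse train should yield, up to numerical error, autocorrelation values of 1 at multiples of its period and about 0 elsewhere.

#include "MirDsp.h" // assumed include path
#include <iostream>
#include <vector>

int main()
{
   // Impulse train of period 4 in a power-of-two-sized buffer.
   std::vector<float> x(16, 0.f);
   for (size_t n = 0; n < x.size(); n += 4)
      x[n] = 1.f;

   const auto r = MIR::GetNormalizedCircularAutocorr(x); // size 16/2 + 1 = 9
   for (size_t lag = 0; lag < r.size(); ++lag)
      std::cout << "lag " << lag << ": " << r[lag] << "\n";
   // Expect ~1 at lags 0, 4 and 8, ~0 elsewhere.
}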

References IsPowOfTwo().

Referenced by MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::GetBestBarDivisionIndex().

◆ GetNormalizedHann()

std::vector< float > MIR::GetNormalizedHann ( int  size)

Definition at line 80 of file MirUtils.cpp.

81{
82 std::vector<float> window(size);
83 for (auto n = 0; n < size; ++n)
84 window[n] = .5 * (1 - std::cos(2 * pi * n / size));
85 const auto windowSum = std::accumulate(window.begin(), window.end(), 0.f);
86 std::transform(
87 window.begin(), window.end(), window.begin(),
88 [windowSum](float w) { return w / windowSum; });
89 return window;
90}
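Because the window is divided by its own sum, its coefficients add up to 1; a quick check (include path assumed):

#include "MirUtils.h" // assumed include path
#include <iostream>
#include <numeric>

int main()
{
   const auto window = MIR::GetNormalizedHann(1024);
   std::cout << std::accumulate(window.begin(), window.end(), 0.f) << "\n"; // ~1
}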

References MIR::anonymous_namespace{MirUtils.cpp}::pi, and size.

Referenced by MIR::anonymous_namespace{MirDsp.cpp}::GetMovingAverage().

◆ GetNumerator()

int MIR::GetNumerator ( TimeSignature  ts)
inline

Definition at line 39 of file MirTypes.h.

40{
41 constexpr std::array<int, static_cast<int>(TimeSignature::_count)>
42 numerators = { 2, 4, 3, 6 };
43 return numerators[static_cast<int>(ts)];
44}
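Together with GetDenominator() above, this maps each TimeSignature enumerator to its fraction, per the lookup tables shown; for example (include path assumed):

#include "MirTypes.h" // assumed include path
#include <iostream>

int main()
{
   using MIR::TimeSignature;
   std::cout << MIR::GetNumerator(TimeSignature::ThreeFour) << "/"
             << MIR::GetDenominator(TimeSignature::ThreeFour) << "\n"; // 3/4
   std::cout << MIR::GetNumerator(TimeSignature::SixEight) << "/"
             << MIR::GetDenominator(TimeSignature::SixEight) << "\n"; // 6/8
}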

References _count.

Referenced by AudacityMirProject::ReconfigureMusicGrid().

◆ GetOctaveError()

OctaveError MIR::GetOctaveError ( double  expected,
double  actual 
)

Gets the tempo detection octave error, as defined in Section 5 of Schreiber, H., Urbano, J. and Müller, M. (2020), "Music Tempo Estimation: Are We Done Yet?", Transactions of the International Society for Music Information Retrieval, 3(1), pp. 111–125. DOI: https://doi.org/10.5334/tismir.43. In short, with an example: two bars of a fast 3/4 can in some cases be interpreted as one bar of 6/8. However, there are 6 beats in the former against 2 in the latter, leading to an "octave error" of 3. In that case, the returned factor would be 3, and the remainder log2(3 * actual / expected).

Definition at line 39 of file MirTestUtils.cpp.

40{
41 constexpr std::array<double, 5> factors { 1., 2., .5, 3., 1. / 3 };
42 std::vector<OctaveError> octaveErrors;
43 std::transform(
44 factors.begin(), factors.end(), std::back_inserter(octaveErrors),
45 [&](double factor) {
46 const auto remainder = std::log2(factor * actual / expected);
47 return OctaveError { factor, remainder };
48 });
49 return *std::min_element(
50 octaveErrors.begin(), octaveErrors.end(),
51 [](const auto& a, const auto& b) {
52 return std::abs(a.remainder) < std::abs(b.remainder);
53 });
54}
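A worked example (include path assumed): with an expected tempo of 120 BPM and an estimate of 60 BPM, the factor 2 from the candidate set {1, 2, 1/2, 3, 1/3} gives the remainder log2(2 * 60 / 120) = 0, i.e., a pure octave error:

#include "MirTestUtils.h" // assumed include path
#include <iostream>

int main()
{
   const auto halved = MIR::GetOctaveError(120. /*expected*/, 60. /*actual*/);
   std::cout << halved.factor << " " << halved.remainder << "\n"; // 2 0

   // A slightly off estimate keeps the closest factor and leaves a remainder:
   const auto off = MIR::GetOctaveError(100., 210.);
   std::cout << off.factor << " " << off.remainder << "\n"; // 0.5 log2(1.05)
}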

Referenced by TEST_CASE().

◆ GetOnsetDetectionFunction()

std::vector< float > MIR::GetOnsetDetectionFunction ( const MirAudioReader &  audio,
const std::function< void(double)> &  progressCallback,
QuantizationFitDebugOutput *  debugOutput 
)

Definition at line 109 of file MirDsp.cpp.

113{
114 StftFrameProvider frameProvider { audio };
115 const auto sampleRate = frameProvider.GetSampleRate();
116 const auto numFrames = frameProvider.GetNumFrames();
117 const auto frameSize = frameProvider.GetFftSize();
118 PffftFloatVector buffer(frameSize);
119 std::vector<float> odf;
120 odf.reserve(numFrames);
121 const auto powSpecSize = frameSize / 2 + 1;
122 PffftFloatVector powSpec(powSpecSize);
123 PffftFloatVector prevPowSpec(powSpecSize);
124 PffftFloatVector firstPowSpec;
125 std::fill(prevPowSpec.begin(), prevPowSpec.end(), 0.f);
126
127 PowerSpectrumGetter getPowerSpectrum { frameSize };
128
129 auto frameCounter = 0;
130 while (frameProvider.GetNextFrame(buffer))
131 {
132 getPowerSpectrum(buffer.aligned(), powSpec.aligned());
133
134 // Compress the frame as per section (6.5) in Müller, Meinard.
135 // Fundamentals of music processing: Audio, analysis, algorithms,
136 // applications. Vol. 5. Cham: Springer, 2015.
137 constexpr auto gamma = 100.f;
138 std::transform(
139 powSpec.begin(), powSpec.end(), powSpec.begin(),
140 [gamma](float x) { return FastLog2(1 + gamma * std::sqrt(x)); });
141
142 if (firstPowSpec.empty())
143 firstPowSpec = powSpec;
144 else
145 odf.push_back(GetNoveltyMeasure(prevPowSpec, powSpec));
146
147 if (debugOutput)
148 debugOutput->postProcessedStft.push_back(powSpec);
149
150 std::swap(prevPowSpec, powSpec);
151
152 if (progressCallback)
153 progressCallback(1. * ++frameCounter / numFrames);
154 }
155
156 // Close the loop.
157 odf.push_back(GetNoveltyMeasure(prevPowSpec, firstPowSpec));
158 assert(IsPowOfTwo(odf.size()));
159
160 const auto movingAverage =
161 GetMovingAverage(odf, frameProvider.GetFrameRate());
162
163 if (debugOutput)
164 {
165 debugOutput->rawOdf = odf;
166 debugOutput->movingAverage = movingAverage;
167 }
168
169 // Subtract moving average from ODF.
170 std::transform(
171 odf.begin(), odf.end(), movingAverage.begin(), odf.begin(),
172 [](float a, float b) { return std::max<float>(a - b, 0.f); });
173
174 return odf;
175}
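GetNoveltyMeasure and GetMovingAverage live in MirDsp.cpp's anonymous namespace and are not reproduced on this page. Purely for intuition, a typical half-wave-rectified spectral-flux novelty between two compressed power spectra looks like the following sketch; this is an illustration of the general idea, not necessarily Audacity's exact implementation:

#include <algorithm>
#include <cstddef>
#include <vector>

// Illustration only: accumulate the positive (increasing) differences between
// successive log-compressed power spectra, so that bursts of new spectral
// energy, i.e. likely onsets, produce large values.
float SpectralFluxNoveltySketch(
   const std::vector<float>& prevPowSpec, const std::vector<float>& powSpec)
{
   float novelty = 0.f;
   for (size_t k = 0; k < powSpec.size(); ++k)
      novelty += std::max(powSpec[k] - prevPowSpec[k], 0.f);
   return novelty;
}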

References PffftFloatVector::aligned(), audio, MIR::anonymous_namespace{MirDsp.cpp}::GetMovingAverage(), MIR::anonymous_namespace{MirDsp.cpp}::GetNoveltyMeasure(), IsPowOfTwo(), MIR::QuantizationFitDebugOutput::movingAverage, MIR::QuantizationFitDebugOutput::postProcessedStft, MIR::QuantizationFitDebugOutput::rawOdf, anonymous_namespace{ClipSegmentTest.cpp}::sampleRate, and anonymous_namespace{NoteTrack.cpp}::swap().

Referenced by GetMeterUsingTatumQuantizationFit().

◆ GetPeakIndices()

std::vector< int > MIR::GetPeakIndices ( const std::vector< float > &  x)

Definition at line 67 of file MirUtils.cpp.

68{
69 std::vector<int> peakIndices;
70 for (auto j = 0; j < x.size(); ++j)
71 {
72 const auto i = j == 0 ? x.size() - 1 : j - 1;
73 const auto k = j == x.size() - 1 ? 0 : j + 1;
74 if (x[i] < x[j] && x[j] > x[k])
75 peakIndices.push_back(j);
76 }
77 return peakIndices;
78}
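The peaks are local maxima taken circularly, i.e., the first and last samples are treated as neighbours. A worked example (include path assumed):

#include "MirUtils.h" // assumed include path
#include <iostream>
#include <vector>

int main()
{
   // Circular local maxima of {0, 1, 0, 2, 1, 3}: index 1 (1 > 0 and 1 > 0),
   // index 3 (2 > 0 and 2 > 1) and index 5 (3 > 1 and 3 > x[0] = 0).
   const std::vector<float> x { 0.f, 1.f, 0.f, 2.f, 1.f, 3.f };
   for (int i : MIR::GetPeakIndices(x))
      std::cout << i << " "; // 1 3 5
   std::cout << "\n";
}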

Referenced by GetMeterUsingTatumQuantizationFit().

◆ GetPossibleBarDivisors()

std::vector< int > MIR::GetPossibleBarDivisors ( int  lower,
int  upper 
)

Function to generate numbers whose prime factorization contains only twos or threes.

Definition at line 54 of file MirUtils.cpp.

55{
56 auto result = GetPowersOf2And3(lower, upper);
57 // Remove divisors that have more than two triplet levels. E.g. 3/4s are
 58 // okay, 3/4s with swung rhythms too, but beyond that it's probably very rare
59 // (e.g. swung 9/8 ??...)
60 result.erase(
61 std::remove_if(
62 result.begin(), result.end(), [](int n) { return n % 27 == 0; }),
63 result.end());
64 return result;
65}
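In other words, the candidates are the 3-smooth numbers (products of powers of 2 and 3) within the given range, minus those with three or more factors of 3. A standalone sketch of that idea, not the actual GetPowersOf2And3 implementation (whose boundary handling is not shown here):

#include <iostream>
#include <vector>

// Illustration: numbers of the form 2^a * 3^b within [lower, upper], with
// multiples of 27 (three or more "triplet levels") removed.
std::vector<int> PossibleBarDivisorsSketch(int lower, int upper)
{
   std::vector<int> result;
   for (int n = lower; n <= upper; ++n)
   {
      int m = n;
      while (m % 2 == 0)
         m /= 2;
      while (m % 3 == 0)
         m /= 3;
      if (m == 1 && n % 27 != 0)
         result.push_back(n);
   }
   return result;
}

int main()
{
   for (int n : PossibleBarDivisorsSketch(2, 36))
      std::cout << n << " "; // 2 3 4 6 8 9 12 16 18 24 32 36
   std::cout << "\n";
}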

References MIR::anonymous_namespace{MirUtils.cpp}::GetPowersOf2And3().

◆ GetProjectSyncInfo()

std::optional< ProjectSyncInfo > MUSIC_INFORMATION_RETRIEVAL_API MIR::GetProjectSyncInfo ( const ProjectSyncInfoInput &  in)

Definition at line 49 of file MusicInformationRetrieval.cpp.

50{
51 if (in.tags.has_value() && in.tags->isOneShot)
52 // That's a one-shot file, we don't want to sync it.
53 return {};
54
55 std::optional<double> bpm;
56 std::optional<TimeSignature> timeSignature;
57 std::optional<TempoObtainedFrom> usedMethod;
58
59 if (in.tags.has_value() && in.tags->bpm.has_value() && *in.tags->bpm > 30.)
60 {
61 bpm = in.tags->bpm;
62 usedMethod = TempoObtainedFrom::Header;
63 }
64 else if (bpm = GetBpmFromFilename(in.filename))
65 usedMethod = TempoObtainedFrom::Title;
66 else if (
67 const auto meter = GetMusicalMeterFromSignal(
68 in.source,
69 in.viewIsBeatsAndMeasures ? FalsePositiveTolerance::Lenient :
70 FalsePositiveTolerance::Strict,
 71 in.progressCallback))
 72 {
73 bpm = meter->bpm;
74 timeSignature = meter->timeSignature;
75 usedMethod = TempoObtainedFrom::Signal;
76 }
77 else
78 return {};
79
80 const auto qpm = *bpm * quarternotesPerBeat[static_cast<int>(
81 timeSignature.value_or(TimeSignature::FourFour))];
82
83 auto recommendedStretch = 1.0;
84 if (!in.projectWasEmpty)
85 // There already is content in this project, meaning that its tempo won't
86 // be changed. Change speed by some power of two to minimize stretching.
87 recommendedStretch =
88 std::pow(2., std::round(std::log2(in.projectTempo / qpm)));
89
90 auto excessDurationInQuarternotes = 0.;
91 auto numQuarters = in.source.GetDuration() * qpm / 60.;
92 const auto roundedNumQuarters = std::round(numQuarters);
93 const auto delta = numQuarters - roundedNumQuarters;
94 // If there is an excess less than a 32nd, we treat it as an edit error.
95 if (0 < delta && delta < 1. / 8)
96 excessDurationInQuarternotes = delta;
97
98 return ProjectSyncInfo {
99 qpm,
100 *usedMethod,
101 timeSignature,
102 recommendedStretch,
103 excessDurationInQuarternotes,
104 };
105}
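A sketch of how a caller might consume the result; the ProjectSyncInfo field names follow the tests and References below, while the construction of the input is left out because it depends on the hosting code:

#include "MirTypes.h"                  // assumed include paths
#include "MusicInformationRetrieval.h"
#include <iostream>

void ReportSyncInfo(const MIR::ProjectSyncInfoInput& in)
{
   const auto info = MIR::GetProjectSyncInfo(in);
   if (!info.has_value())
   {
      // One-shot file, or no tempo found in header, title or signal.
      std::cout << "No sync info\n";
      return;
   }
   std::cout << "Quarter-notes per minute: " << info->rawAudioTempo << "\n"
             << "Stretch-minimizing power of two: "
             << info->stretchMinimizingPowOfTwo << "\n";
   if (info->timeSignature.has_value())
      std::cout << "Time signature: "
                << MIR::GetNumerator(*info->timeSignature) << "/"
                << MIR::GetDenominator(*info->timeSignature) << "\n";
}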

References MIR::ProjectSyncInfoInput::filename, FourFour, GetBpmFromFilename(), MIR::MirAudioReader::GetDuration(), GetMusicalMeterFromSignal(), Header, Lenient, MIR::ProjectSyncInfoInput::progressCallback, MIR::ProjectSyncInfoInput::projectTempo, MIR::ProjectSyncInfoInput::projectWasEmpty, MIR::anonymous_namespace{MusicInformationRetrieval.cpp}::quarternotesPerBeat, fast_float::round(), Signal, MIR::ProjectSyncInfoInput::source, Strict, MIR::ProjectSyncInfoInput::tags, Title, and MIR::ProjectSyncInfoInput::viewIsBeatsAndMeasures.

Referenced by anonymous_namespace{ProjectFileManager.cpp}::RunTempoDetection(), and TEST_CASE().

◆ GetRocInfo()

template<typename Result >
RocInfo MIR::GetRocInfo ( std::vector< Result >  results,
double  allowedFalsePositiveRate = 0. 
)

The Receiver Operating Characteristic (ROC) curve is a plot of the true positive rate (TPR) against the false positive rate (FPR) for the different possible thresholds of a binary classifier. The area under the curve (AUC) is a measure of the classifier's performance. The greater the AUC, the better the classifier.

Template Parameters
Result  has public members truth (boolean) and score (numeric)
Parameters
results  true classifications and scores of some population
Precondition
at least one of results is really positive (truth is true), and at least one is really negative
0. <= allowedFalsePositiveRate && allowedFalsePositiveRate <= 1.
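A worked example, matching the last case in TEST_CASE("GetRocInfo") further down: for samples (truth, score) = (false, 1), (true, 2), (false, 3), (true, 4), sorting by descending score gives the breakpoints (FPR, TPR) = (0, 0.5), (0.5, 0.5), (0.5, 1), (1, 1). Summing the trapezoids under that curve gives 0 + 0.5·0.5 + 0 + 0.5·1 + 0 = 0.75, which is the AUC that test expects.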

Definition at line 52 of file MirTestUtils.h.

53{
54 const auto truth = std::mem_fn(&Result::truth);
55 const auto falsity = std::not_fn(truth);
56
57 // There is at least one positive and one negative sample.
58 assert(any_of(results.begin(), results.end(), truth));
59 assert(any_of(results.begin(), results.end(), falsity));
60
61 assert(allowedFalsePositiveRate >= 0. && allowedFalsePositiveRate <= 1.);
62 allowedFalsePositiveRate = std::clamp(allowedFalsePositiveRate, 0., 1.);
63
64 // Sort the results by score, descending.
65 std::sort(results.begin(), results.end(), [](const auto& a, const auto& b) {
66 return a.score > b.score;
67 });
68
69 const auto size = results.size();
70 const auto numPositives = count_if(results.begin(), results.end(), truth);
71 const auto numNegatives = size - numPositives;
72
73 // Find true and false positive rates for various score thresholds.
74 // True positive and false positive counts are nondecreasing with i,
75 // therefore if false positive rate has increased at some i, true positive
76 // rate has not decreased.
77 std::vector<double> truePositiveRates;
78 truePositiveRates.reserve(size);
79 std::vector<double> falsePositiveRates;
80 falsePositiveRates.reserve(size);
81 size_t numTruePositives = 0;
82 size_t numFalsePositives = 0;
83 for (const auto& result : results)
84 {
85 if (result.truth)
86 ++numTruePositives;
87 else
88 ++numFalsePositives;
89 truePositiveRates.push_back(
90 static_cast<double>(numTruePositives) / numPositives);
91 falsePositiveRates.push_back(
92 static_cast<double>(numFalsePositives) / numNegatives);
93 }
94
95 // Now find the area under the non-decreasing curve with FPR as x-axis,
96 // TPR as y, and i as a parameter. (This curve is within a square with unit
97 // side.)
98 double auc = 0.;
99 for (size_t i = 0; i <= size; ++i)
100 {
101 const auto leftFpr = i == 0 ? 0. : falsePositiveRates[i - 1];
102 const auto rightFpr = i == size ? 1. : falsePositiveRates[i];
103 const auto leftTpr = i == 0 ? 0. : truePositiveRates[i - 1];
104 const auto rightTpr = i == size ? 1. : truePositiveRates[i];
105 const auto trapezoid = (rightTpr + leftTpr) * (rightFpr - leftFpr) / 2.;
106 assert(trapezoid >= 0); // See comments above
107 auc += trapezoid;
108 }
109
110 // Find the parameter at which the x coordinate exceeds the allowed FPR.
111 const auto it = std::upper_bound(
112 falsePositiveRates.begin(), falsePositiveRates.end(),
113 allowedFalsePositiveRate);
114
115 if (it == falsePositiveRates.end())
 116 // All breakpoints satisfy the constraint. Return the least score.
117 return { auc, results.back().score };
118 else if (it == falsePositiveRates.begin())
119 // No breakpoint satisfies the constraint. Return the greatest score.
120 return { auc, results.front().score };
121
122 // For threshold, use the score halfway between the last breakpoint that
123 // satisfies the constraint and the first breakpoint that doesn't.
124 const auto index = it - falsePositiveRates.begin();
125 const auto threshold = (results[index - 1].score + results[index].score) / 2;
126
127 return { auc, threshold };
128}

References size.

Referenced by TEST_CASE().

◆ IsPowOfTwo()

constexpr auto MIR::IsPowOfTwo ( int  x)
constexpr

Definition at line 28 of file MirUtils.h.

29{
30 return x > 0 && (x & (x - 1)) == 0;
31}

Referenced by MIR::anonymous_namespace{GetMeterUsingTatumQuantizationFit.cpp}::GetBestBarDivisionIndex(), GetNormalizedCircularAutocorr(), GetOnsetDetectionFunction(), MIR::StftFrameProvider::StftFrameProvider(), and TEST_CASE().

◆ PrintPythonVector()

template<typename T >
void MIR::PrintPythonVector ( std::ofstream &  ofs,
const std::vector< T > &  v,
const char *  name 
)

Definition at line 133 of file MirTestUtils.h.

135{
136 ofs << name << " = [";
137 std::for_each(v.begin(), v.end(), [&](T x) { ofs << x << ","; });
138 ofs << "]\n";
139}
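Usage sketch (include path assumed): this writes a line that a Python plotting script can import or exec.

#include "MirTestUtils.h" // assumed include path
#include <fstream>
#include <vector>

int main()
{
   std::ofstream ofs { "debug_plots.py" };
   const std::vector<float> odf { 0.1f, 0.5f, 0.2f };
   MIR::PrintPythonVector(ofs, odf, "odf"); // writes: odf = [0.1,0.5,0.2,]
}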

References name.

Referenced by TEST_CASE().

◆ ProgressBar()

void MIR::ProgressBar ( int  width,
int  percent 
)

Definition at line 26 of file MirTestUtils.cpp.

27{
28 int progress = (width * percent) / 100;
29 std::cout << "[";
30 for (int i = 0; i < width; ++i)
31 if (i < progress)
32 std::cout << "=";
33 else
34 std::cout << " ";
35 std::cout << "] " << std::setw(3) << percent << "%\r";
36 std::cout.flush();
37}

Referenced by TEST_CASE().

◆ SynchronizeProject()

MUSIC_INFORMATION_RETRIEVAL_API void MIR::SynchronizeProject ( const std::vector< std::shared_ptr< AnalyzedAudioClip > > &  clips,
ProjectInterface &  project,
bool  projectWasEmpty 
)

Definition at line 149 of file MusicInformationRetrieval.cpp.

152{
153 const auto isBeatsAndMeasures = project.ViewIsBeatsAndMeasures();
154
155 if (!projectWasEmpty && !isBeatsAndMeasures)
156 return;
157
158 const auto projectTempo =
159 !projectWasEmpty ? std::make_optional(project.GetTempo()) : std::nullopt;
160
161 if (!std::any_of(
162 clips.begin(), clips.end(),
163 [](const std::shared_ptr<AnalyzedAudioClip>& clip) {
164 return clip->GetSyncInfo().has_value();
165 }))
166 return;
167
168 Finally Do = [&] {
169 // Re-evaluate if we are in B&M view - we might have convinced the user to
170 // switch:
171 if (!project.ViewIsBeatsAndMeasures())
172 return;
173 std::for_each(
174 clips.begin(), clips.end(),
175 [&](const std::shared_ptr<AnalyzedAudioClip>& clip) {
176 clip->Synchronize();
177 });
178 project.OnClipsSynchronized();
179 };
180
181 if (!projectWasEmpty && isBeatsAndMeasures)
182 return;
183
184 const auto [loopIndices, oneshotIndices] = [&] {
185 std::vector<size_t> loopIndices;
186 std::vector<size_t> oneshotIndices;
187 for (size_t i = 0; i < clips.size(); ++i)
188 if (clips[i]->GetSyncInfo().has_value())
189 loopIndices.push_back(i);
190 else
191 oneshotIndices.push_back(i);
192 return std::make_pair(loopIndices, oneshotIndices);
193 }();
194
195 // Favor results based on reliability. We assume that header info is most
196 // reliable, followed by title, followed by DSP.
197 std::unordered_map<TempoObtainedFrom, size_t> indexMap;
198 std::for_each(loopIndices.begin(), loopIndices.end(), [&](size_t i) {
199 const auto usedMethod = clips[i]->GetSyncInfo()->usedMethod;
200 if (!indexMap.count(usedMethod))
201 indexMap[usedMethod] = i;
202 });
203
204 const auto chosenIndex = indexMap.count(TempoObtainedFrom::Header) ?
205 indexMap.at(TempoObtainedFrom::Header) :
206 indexMap.count(TempoObtainedFrom::Title) ?
207 indexMap.at(TempoObtainedFrom::Title) :
208 indexMap.at(TempoObtainedFrom::Signal);
209
210 const auto& chosenSyncInfo = *clips[chosenIndex]->GetSyncInfo();
211 const auto isSingleFileImport = clips.size() == 1;
212 if (!project.ShouldBeReconfigured(
213 chosenSyncInfo.rawAudioTempo, isSingleFileImport))
214 return;
215
216 project.ReconfigureMusicGrid(
217 chosenSyncInfo.rawAudioTempo, chosenSyncInfo.timeSignature);
218
219 // Reset tempo of one-shots to this new project tempo, so that they don't
220 // get stretched:
221 std::for_each(oneshotIndices.begin(), oneshotIndices.end(), [&](size_t i) {
222 clips[i]->SetRawAudioTempo(chosenSyncInfo.rawAudioTempo);
223 });
224}

References Header, project, Signal, and Title.

Referenced by ProjectFileManager::Import(), and TEST_CASE().

◆ TEST_CASE() [1/8]

MIR::TEST_CASE ( "GetBpmFromFilename"  )

Definition at line 10 of file MusicInformationRetrievalTests.cpp.

11{
12 const std::vector<std::pair<std::string, std::optional<double>>> testCases {
13 { "120 BPM", 120 },
14
15 // there may be an extension
16 { "120 BPM.opus", 120 },
17 { "120 BPM", 120 },
18
 19 // it may be preceded by a path
20 { "C:/my\\path/to\\120 BPM", 120 },
21
22 // value must be between 30 and 300 inclusive
23 { "1 BPM", std::nullopt },
24 { "29 BPM", std::nullopt },
25 { "30 BPM", 30 },
26 { "300 BPM", 300 },
27 { "301 BPM", std::nullopt },
28 { "1000 BPM", std::nullopt },
29
 30 // it may be preceded by zeros
31 { "000120 BPM", 120 },
32
33 // there may be something before the value
34 { "anything 120 BPM", 120 },
35 // but then there must be a separator
36 { "anything120 BPM", std::nullopt },
37 // there may be something after the value
38 { "120 BPM anything", 120 },
39 // but then there must also be a separator
40 { "120 BPManything", std::nullopt },
41
42 // what separator is used doesn't matter
43 { "anything-120-BPM", 120 },
44 { "anything_120_BPM", 120 },
45 { "anything.120.BPM", 120 },
46
47 // but of course that can't be an illegal filename character
48 { "120/BPM", std::nullopt },
49 { "120\\BPM", std::nullopt },
50 { "120:BPM", std::nullopt },
51 { "120;BPM", std::nullopt },
52 { "120'BPM", std::nullopt },
53 // ... and so on.
54
55 // separators before and after don't have to match
56 { "anything_120-BPM", 120 },
57
58 // no separator between value and "bpm" is ok
59 { "anything.120BPM", 120 },
60
61 // a few real file names found out there
62 { "Cymatics - Cyclone Top Drum Loop 3 - 174 BPM", 174 },
63 { "Fantasie Impromptu Op. 66.mp3", std::nullopt },
64 };
65 std::vector<bool> success(testCases.size());
66 std::transform(
67 testCases.begin(), testCases.end(), success.begin(),
68 [](const auto& testCase) {
69 return GetBpmFromFilename(testCase.first) == testCase.second;
70 });
71 REQUIRE(
72 std::all_of(success.begin(), success.end(), [](bool b) { return b; }));
73}

◆ TEST_CASE() [2/8]

MIR::TEST_CASE ( "GetChecksum"  )

Definition at line 132 of file TatumQuantizationFitBenchmarking.cpp.

133{
134 constexpr auto bufferSize = 5;
135 const auto checksum = GetChecksum<bufferSize>(SquareWaveMirAudioReader {});
136 REQUIRE(checksum == 0.);
137}

◆ TEST_CASE() [3/8]

MIR::TEST_CASE ( "GetProjectSyncInfo"  )

Definition at line 83 of file MusicInformationRetrievalTests.cpp.

84{
85 SECTION("operator bool")
86 {
87 SECTION("returns false if ACID tag says one-shot")
88 {
89 auto input = arbitaryInput;
90 input.tags.emplace(AcidizerTags::OneShot {});
91 REQUIRE(!GetProjectSyncInfo(input).has_value());
92 }
93
94 SECTION("returns true if ACID tag says non-one-shot")
95 {
96 auto input = arbitaryInput;
97 input.tags.emplace(AcidizerTags::Loop { 120.0 });
98 REQUIRE(GetProjectSyncInfo(input).has_value());
99 }
100
101 SECTION("BPM is invalid")
102 {
103 SECTION("returns true if filename has BPM")
104 {
105 auto input = arbitaryInput;
106 input.filename = filename100bpm;
107 REQUIRE(GetProjectSyncInfo(input).has_value());
108 }
109
110 SECTION("returns false if filename has no BPM")
111 {
112 auto input = arbitaryInput;
113 input.filename = "filenameWithoutBpm";
114 REQUIRE(!GetProjectSyncInfo(input).has_value());
115 }
116 }
117 }
118
119 SECTION("GetProjectSyncInfo")
120 {
121 SECTION("prioritizes ACID tags over filename")
122 {
123 auto input = arbitaryInput;
124 input.filename = filename100bpm;
125 input.tags.emplace(AcidizerTags::Loop { 120. });
126 const auto info = GetProjectSyncInfo(input);
127 REQUIRE(info);
128 REQUIRE(info->rawAudioTempo == 120);
129 }
130
131 SECTION("falls back on filename if tag bpm is invalid")
132 {
133 auto input = arbitaryInput;
134 input.filename = filename100bpm;
135 input.tags.emplace(AcidizerTags::Loop { -1. });
136 const auto info = GetProjectSyncInfo(input);
137 REQUIRE(info);
138 REQUIRE(info->rawAudioTempo == 100);
139 }
140
141 SECTION("stretchMinimizingPowOfTwo is as expected")
142 {
143 auto input = arbitaryInput;
144 input.filename = filename100bpm;
145
146 input.projectTempo = 100.;
147 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 1.);
148
149 // Project tempo twice as fast. Without compensation, the audio would
150 // be stretched to 0.5 its length. Not stretching it at all may still
151 // yield musically interesting results.
152 input.projectTempo = 200;
153 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 2.);
154
155 // Same principle applies in the following:
156 input.projectTempo = 400;
157 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 4.);
158 input.projectTempo = 50;
159 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == .5);
160 input.projectTempo = 25;
161 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == .25);
162
163 // Now testing edge cases:
164 input.projectTempo = 100 * std::pow(2, .51);
165 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 2.);
166 input.projectTempo = 100 * std::pow(2, .49);
167 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 1.);
168 input.projectTempo = 100 * std::pow(2, -.49);
169 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == 1.);
170 input.projectTempo = 100 * std::pow(2, -.51);
171 REQUIRE(GetProjectSyncInfo(input)->stretchMinimizingPowOfTwo == .5);
172 }
173 }
174}

References MIR::anonymous_namespace{MusicInformationRetrievalTests.cpp}::arbitaryInput, MIR::ProjectSyncInfoInput::filename, MIR::anonymous_namespace{MusicInformationRetrievalTests.cpp}::filename100bpm, GetProjectSyncInfo(), and MIR::ProjectSyncInfoInput::tags.

◆ TEST_CASE() [4/8]

MIR::TEST_CASE ( "GetRocInfo"  )

Definition at line 73 of file TatumQuantizationFitBenchmarking.cpp.

74{
75 // We use the AUC as a measure of the classifier's performance. With a
76 // suitable data set, this helps us detect regressions, and guide fine-tuning
77 // of the algorithm. This test should help understand how it works and also
78 // make sure that we've implemented that metric correctly :)
79
80 struct Sample
81 {
82 bool truth;
83 double score;
84 };
85
86 using Samples = std::vector<Sample>;
87
88 struct Expected
89 {
90 double areaUnderCurve;
91 double threshold;
92 };
93
94 struct TestCase
95 {
96 const Samples samples;
97 const double allowedFalsePositiveRate;
98 const Expected expected;
99 };
100
101 const std::vector<TestCase> testCases {
102 // Classifier is upside down. We don't tolerate false positives. The
 103 // returned threshold is then 200, which satisfies this, but the TPR will
 104 // also be 0 ...
105 TestCase { Samples { { true, 100. }, { false, 200. } }, 0.,
106 Expected { 0., 200. } },
107
108 // Classifier is still upside down. We'll get true positives only if we
109 // accept an FPR of 1.
110 TestCase { Samples { { true, 100. }, { false, 200. } }, 1.,
111 Expected { 0., 100. } },
112
113 // Now we have a classifier that works. We don't accept false positives.
114 TestCase { Samples { { false, 100. }, { false, 150. }, { true, 200. } },
115 0., Expected { 1., 175. } },
116
117 // A random classifier, which should have an AUC of 0.75.
118 TestCase {
119 Samples { { false, 1. }, { true, 2. }, { false, 3. }, { true, 4. } },
120 0.5, Expected { .75, 1.5 } },
121 };
122
123 for (const auto& testCase : testCases)
124 {
125 const auto roc =
126 GetRocInfo(testCase.samples, testCase.allowedFalsePositiveRate);
127 REQUIRE(roc.areaUnderCurve == testCase.expected.areaUnderCurve);
128 REQUIRE(roc.threshold == testCase.expected.threshold);
129 }
130}

References GetRocInfo().

◆ TEST_CASE() [5/8]

MIR::TEST_CASE ( "StftFrameProvider"  )

Definition at line 46 of file StftFrameProviderTests.cpp.

47{
48 SECTION("handles empty files")
49 {
50 StftFrameProvider sut { TestMirAudioReader { 0 } };
51 PffftFloatVector frame;
52 REQUIRE(!sut.GetNextFrame(frame));
53 }
54 SECTION("handles very short files")
55 {
56 StftFrameProvider sut { TestMirAudioReader { 1 } };
57 PffftFloatVector frame;
58 REQUIRE(!sut.GetNextFrame(frame));
59 }
60 SECTION("has power-of-two number of frames")
61 {
62 StftFrameProvider sut { TestMirAudioReader { 123456 } };
63 REQUIRE(IsPowOfTwo(sut.GetNumFrames()));
64 }
65 SECTION("respects MirAudioReader boundaries")
66 {
67 TestMirAudioReader reader { 123456 };
68 StftFrameProvider sut { reader };
69 PffftFloatVector frame;
70 while (sut.GetNextFrame(frame))
71 ;
72 }
73}

References IsPowOfTwo().

◆ TEST_CASE() [6/8]

MIR::TEST_CASE ( "SynchronizeProject"  )

Definition at line 176 of file MusicInformationRetrievalTests.cpp.

177{
178 constexpr auto initialProjectTempo = 100.;
179 FakeProjectInterface project { initialProjectTempo };
180
181 SECTION("single-file import")
182 {
183 constexpr FakeAnalyzedAudioClip::Params clipParams {
184 123., TempoObtainedFrom::Title
185 };
186
187 // Generate all possible situations, and in the sections filter for the
188 // conditions we want to check.
189 project.isBeatsAndMeasures = GENERATE(false, true);
190 project.shouldBeReconfigured = GENERATE(false, true);
191 const auto projectWasEmpty = GENERATE(false, true);
192 const auto clipsHaveTempo = GENERATE(false, true);
193
194 const std::vector<std::shared_ptr<AnalyzedAudioClip>> clips {
195 std::make_shared<FakeAnalyzedAudioClip>(
196 clipsHaveTempo ? std::make_optional(clipParams) : std::nullopt)
197 };
198
199 const auto projectWasReconfigured = [&](bool yes) {
200 const auto reconfigurationCheck = yes == project.wasReconfigured;
201 const auto projectTempoCheck =
202 project.projectTempo ==
203 (yes ? clipParams.tempo : initialProjectTempo);
204 REQUIRE(reconfigurationCheck);
205 REQUIRE(projectTempoCheck);
206 };
207
208 const auto clipsWereSynchronized = [&](bool yes) {
209 const auto check = yes == project.clipsWereSynchronized;
210 REQUIRE(check);
211 };
212
213 SECTION("nothing happens if")
214 {
215 SECTION("no clip has tempo")
216 if (!clipsHaveTempo)
217 {
218 SynchronizeProject(clips, project, projectWasEmpty);
219 projectWasReconfigured(false);
220 clipsWereSynchronized(false);
221 }
222 SECTION(
223 "user doesn't want reconfiguration and view is minutes and seconds")
224 if (!project.shouldBeReconfigured && !project.isBeatsAndMeasures)
225 {
226 SynchronizeProject(clips, project, projectWasEmpty);
227 projectWasReconfigured(false);
228 clipsWereSynchronized(false);
229 }
230 SECTION(
231 "user wants reconfiguration but view is minutes and seconds and project is not empty")
232 if (
233 project.shouldBeReconfigured && !project.isBeatsAndMeasures &&
234 !projectWasEmpty)
235 {
236 SynchronizeProject(clips, project, projectWasEmpty);
237 projectWasReconfigured(false);
238 clipsWereSynchronized(false);
239 }
240 }
241
242 SECTION(
243 "project gets reconfigured only if clips have tempo, user wants to and project is empty")
244 {
245 SynchronizeProject(clips, project, projectWasEmpty);
246 projectWasReconfigured(
247 clipsHaveTempo && project.shouldBeReconfigured && projectWasEmpty);
248 }
249
250 SECTION("project does not get reconfigured if")
251 {
252 SECTION("user doesn't want to")
253 if (!project.shouldBeReconfigured)
254 {
255 SynchronizeProject(clips, project, projectWasEmpty);
256 projectWasReconfigured(false);
257 }
258
259 SECTION("project was not empty")
260 if (!projectWasEmpty)
261 {
262 SynchronizeProject(clips, project, projectWasEmpty);
263 projectWasReconfigured(false);
264 }
265 }
266
267 SECTION("clips don't get synchronized if view is minutes and seconds and")
268 if (!project.isBeatsAndMeasures)
269 {
270 SECTION("user says no to reconfiguration")
271 if (!project.shouldBeReconfigured)
272 {
273 SynchronizeProject(clips, project, projectWasEmpty);
274 clipsWereSynchronized(false);
275 }
276 SECTION("project was not empty")
277 if (!projectWasEmpty)
278 {
279 SynchronizeProject(clips, project, projectWasEmpty);
280 clipsWereSynchronized(false);
281 }
282 }
283
284 SECTION("clips get synchronized if some clip has tempo and")
285 if (clipsHaveTempo)
286 {
287 SECTION(
288 "user doesn't want reconfiguration but view is beats and measures")
289 if (!project.shouldBeReconfigured && project.isBeatsAndMeasures)
290 {
291 SynchronizeProject(clips, project, projectWasEmpty);
292 clipsWereSynchronized(true);
293 }
294 SECTION(
295 "user wants reconfiguration, view is beats and measures and project is not empty")
296 if (
297 project.shouldBeReconfigured && project.isBeatsAndMeasures &&
298 !projectWasEmpty)
299 {
300 SynchronizeProject(clips, project, projectWasEmpty);
301 clipsWereSynchronized(true);
302 }
303 }
304 }
305
306 SECTION("multiple-file import")
307 {
308 project.shouldBeReconfigured = true;
309 constexpr auto projectWasEmpty = true;
310
311 SECTION(
312 "for clips of different tempi, precedence is header-based, then title-based, then signal-based")
313 {
 314 SynchronizeProject(
 315 {
316 std::make_shared<FakeAnalyzedAudioClip>(
317 FakeAnalyzedAudioClip::Params { 123.,
318 TempoObtainedFrom::Title }),
319 std::make_shared<FakeAnalyzedAudioClip>(
320 FakeAnalyzedAudioClip::Params { 456.,
321 TempoObtainedFrom::Header }),
322 std::make_shared<FakeAnalyzedAudioClip>(
323 FakeAnalyzedAudioClip::Params { 789.,
324 TempoObtainedFrom::Signal }),
325 },
326 project, projectWasEmpty);
327 REQUIRE(project.projectTempo == 456.);
328
 329 SynchronizeProject(
 330 {
331 std::make_shared<FakeAnalyzedAudioClip>(
332 FakeAnalyzedAudioClip::Params { 789.,
333 TempoObtainedFrom::Signal }),
334 std::make_shared<FakeAnalyzedAudioClip>(
335 FakeAnalyzedAudioClip::Params { 123.,
336 TempoObtainedFrom::Title }),
337 },
338 project, projectWasEmpty);
339 REQUIRE(project.projectTempo == 123.);
340
 341 SynchronizeProject(
 342 {
343 std::make_shared<FakeAnalyzedAudioClip>(
344 FakeAnalyzedAudioClip::Params { 789.,
345 TempoObtainedFrom::Signal }),
346 },
347 project, projectWasEmpty);
348 REQUIRE(project.projectTempo == 789.);
349 }
350
351 SECTION("raw audio tempo of one-shot clips is set to project tempo")
352 {
353 const auto oneShotClip =
354 std::make_shared<FakeAnalyzedAudioClip>(std::nullopt);
355 constexpr auto whicheverMethod = TempoObtainedFrom::Signal;
 356 SynchronizeProject(
 357 {
358 std::make_shared<FakeAnalyzedAudioClip>(
359 FakeAnalyzedAudioClip::Params { 123., whicheverMethod }),
360 oneShotClip,
361 },
362 project, projectWasEmpty);
363 REQUIRE(project.projectTempo == 123);
364 REQUIRE(oneShotClip->rawAudioTempo == 123);
365 }
366 }
367}

References Header, project, Signal, SynchronizeProject(), and Title.

◆ TEST_CASE() [7/8]

MIR::TEST_CASE ( "TatumQuantizationFitBenchmarking"  )

Definition at line 160 of file TatumQuantizationFitBenchmarking.cpp.

161{
162 // For this test to run, you will need to set `runLocally` to `true`, and
163 // you'll also need the benchmarking sound files. To get these, just open
164 // `download-benchmarking-dataset.html` in a browser. This will download a
165 // zip file that you'll need to extract and place in a `benchmarking-dataset`
166 // directory under this directory.
167
168 // Running this test will update
169 // `TatumQuantizationFitBenchmarkingOutput/summary.txt`. The summary contains
170 //
171 // 1. the AUC metric for regression-testing,
172 // 2. the strict- and lenient-mode thresholds,
173 // 3. the octave-error RMS (Schreiber, H., et al. (2020)), and
174 // 4. the hash of the audio files used.
175 //
176 // The AUC can only be used for comparison if the hash doesn't change. At the
177 // time of writing, the benchmarking can only conveniently be run on the
178 // author's machine (Windows), because the files used are not
179 // redistributable. Setting up a redistributable dataset that can be used by
180 // other developers is currently being worked on.
181
182 // We only observe the results for the most lenient classifier. The other,
183 // stricter classifier will yield the same results, only with fewer false
184 // positives.
185 if (!runLocally)
186 return;
187
188 constexpr auto tolerance = FalsePositiveTolerance::Lenient;
189 constexpr int progressBarWidth = 50;
190 const auto audioFiles = GetBenchmarkingAudioFiles();
191 std::stringstream sampleValueCsv;
192 sampleValueCsv
193 << "truth,score,tatumRate,bpm,ts,octaveFactor,octaveError,lag,filename\n";
194
195 float checksum = 0.f;
196 struct Sample
197 {
198 bool truth;
199 double score;
200 std::optional<OctaveError> octaveError;
201 };
202 std::vector<Sample> samples;
203 const auto numFiles = audioFiles.size();
204 auto count = 0;
205 std::chrono::milliseconds computationTime { 0 };
206 std::transform(
207 audioFiles.begin(), audioFiles.begin() + numFiles,
208 std::back_inserter(samples), [&](const std::string& wavFile) {
209 const WavMirAudioReader audio { wavFile };
210 checksum += GetChecksum(audio);
211 QuantizationFitDebugOutput debugOutput;
212 std::function<void(double)> progressCb;
213 const auto now = std::chrono::steady_clock::now();
214 GetMusicalMeterFromSignal(audio, tolerance, progressCb, &debugOutput);
215 computationTime +=
216 std::chrono::duration_cast<std::chrono::milliseconds>(
217 std::chrono::steady_clock::now() - now);
218 ProgressBar(progressBarWidth, 100 * count++ / numFiles);
219 const auto expected = GetBpmFromFilename(wavFile);
220 const auto truth = expected.has_value();
221 const std::optional<OctaveError> error =
222 truth && debugOutput.bpm > 0 ?
223 std::make_optional(GetOctaveError(*expected, debugOutput.bpm)) :
224 std::nullopt;
225 sampleValueCsv << (truth ? "true" : "false") << ","
226 << debugOutput.score << ","
227 << 60. * debugOutput.tatumQuantization.numDivisions /
228 debugOutput.audioFileDuration
229 << "," << debugOutput.bpm << ","
230 << ToString(debugOutput.timeSignature) << ","
231 << (error.has_value() ? error->factor : 0.) << ","
232 << (error.has_value() ? error->remainder : 0.) << ","
233 << debugOutput.tatumQuantization.lag << ","
234 << Pretty(wavFile) << "\n";
235 return Sample { truth, debugOutput.score, error };
236 });
237
238 {
239 std::ofstream timeMeasurementFile { "./timeMeasurement.txt" };
240 timeMeasurementFile << computationTime.count() << "ms\n";
241 }
242
243 // AUC of the ROC curve. Tells how good our loop/not-loop classifier is.
244 const auto rocInfo = GetRocInfo(
245 samples, loopClassifierSettings.at(tolerance).allowedFalsePositiveRate);
246
247 const auto strictThreshold =
248 GetRocInfo(
249 samples, loopClassifierSettings.at(FalsePositiveTolerance::Strict)
250 .allowedFalsePositiveRate)
251 .threshold;
252
253 // Get RMS of octave errors. Tells how good the BPM estimation is.
254 const auto octaveErrors = std::accumulate(
255 samples.begin(), samples.end(), std::vector<double> {},
256 [&](std::vector<double> octaveErrors, const Sample& sample)
257 {
258 if (sample.octaveError.has_value())
259 octaveErrors.push_back(sample.octaveError->remainder);
260 return octaveErrors;
261 });
262 const auto octaveErrorStd = std::sqrt(
263 std::accumulate(
264 octaveErrors.begin(), octaveErrors.end(), 0.,
265 [&](double sum, double octaveError)
266 { return sum + octaveError * octaveError; }) /
267 octaveErrors.size());
268
269 constexpr auto previousAuc = 0.9312244897959182;
270 const auto classifierQualityHasChanged =
271 std::abs(rocInfo.areaUnderCurve - previousAuc) >= 0.01;
272
273 // Only update the summary if the figures have significantly changed.
274 if (classifierQualityHasChanged)
275 {
276 std::ofstream summaryFile {
277 std::string(CMAKE_CURRENT_SOURCE_DIR) +
278 "/TatumQuantizationFitBenchmarkingOutput/summary.txt"
279 };
280 summaryFile << std::setprecision(
281 std::numeric_limits<double>::digits10 + 1)
282 << "AUC: " << rocInfo.areaUnderCurve << "\n"
283 << "Strict Threshold (Minutes-and-Seconds): "
284 << strictThreshold << "\n"
285 << "Lenient Threshold (Beats-and-Measures): "
286 << rocInfo.threshold << "\n"
287 << "Octave error RMS: " << octaveErrorStd << "\n"
288 << "Audio file checksum: " << checksum << "\n";
289 // Write sampleValueCsv to a file.
290 std::ofstream sampleValueCsvFile {
291 std::string(CMAKE_CURRENT_SOURCE_DIR) +
292 "/TatumQuantizationFitBenchmarkingOutput/sampleValues.csv"
293 };
294 sampleValueCsvFile << sampleValueCsv.rdbuf();
295 }
296
297 // If this fails, some non-refactoring code change has happened. If
298 // `rocInfo.areaUnderCurve > previousAuc`, there is probably no argument
299 // about the change. Otherwise, the change is either an inadvertent bug
300 // or, if deliberate, it should be well justified.
301 REQUIRE(!classifierQualityHasChanged);
302}
OctaveError GetOctaveError(double expected, double actual)
Gets the tempo detection octave error, as defined in section 5. of Schreiber, H., Urbano,...
float GetChecksum(const MirAudioReader &source)
Definition: MirTestUtils.h:163
void ProgressBar(int width, int percent)
auto ToString(const std::optional< TimeSignature > &ts)
static constexpr auto runLocally
Definition: MirTestUtils.h:29
__finl float_x4 __vecc sqrt(const float_x4 &a)
const double threshold
Definition: MirTestUtils.h:34

References audio, MIR::QuantizationFitDebugOutput::audioFileDuration, MIR::QuantizationFitDebugOutput::bpm, MIR::anonymous_namespace{TatumQuantizationFitBenchmarking.cpp}::GetBenchmarkingAudioFiles(), GetBpmFromFilename(), GetChecksum(), GetMusicalMeterFromSignal(), GetOctaveError(), MIR::OnsetQuantization::lag, Lenient, MIR::OnsetQuantization::numDivisions, MIR::anonymous_namespace{TatumQuantizationFitBenchmarking.cpp}::Pretty(), ProgressBar(), runLocally, MIR::QuantizationFitDebugOutput::score, MIR::QuantizationFitDebugOutput::tatumQuantization, MIR::QuantizationFitDebugOutput::timeSignature, and ToString().
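For clarity, the "Octave error RMS" figure written to summary.txt is simply the root-mean-square of the per-file octave-error remainders accumulated above. In the notation of the listing, with r_i the remainder of the i-th sample that has a valid octave error and N the number of such samples:

% Restatement of the accumulate/sqrt computation in the listing above.
\mathrm{octaveErrorStd} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} r_i^{2}}

Note that, despite the variable name octaveErrorStd, no mean is subtracted, so the reported figure is an RMS rather than a standard deviation.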

◆ TEST_CASE() [8/8]

MIR::TEST_CASE ( "TatumQuantizationFitVisualization"  )

Definition at line 11 of file TatumQuantizationFitVisualization.cpp.

12{
13 // This test produces python files containing data. Besides being useful for
14 // debugging, after you have run this, you can run
15 // `visualize_debug_output.py` to visualize the working of the algorithm, or
16 // `visualize_post-processed_STFT.py` to visualize the STFT used to produce
17 // the ODF.
18
19 if (!runLocally)
20 return;
21
22 const auto wavFile =
23 std::string { CMAKE_CURRENT_SOURCE_DIR } +
24 "/benchmarking-dataset/loops/Acoustic Loop Lucaz Collab 116BPM.wav.mp3";
25 const WavMirAudioReader audio { wavFile };
26 QuantizationFitDebugOutput debugOutput;
27 const auto result = GetMusicalMeterFromSignal(
28 audio, FalsePositiveTolerance::Lenient, nullptr, &debugOutput);
29
30 std::ofstream debug_output_module {
31 std::string(CMAKE_CURRENT_SOURCE_DIR) +
32 "/TatumQuantizationFitVisualization/debug_output.py"
33 };
34 debug_output_module << "wavFile = \"" << wavFile << "\"\n";
35 debug_output_module << "odfSr = " << debugOutput.odfSr << "\n";
36 debug_output_module << "audioFileDuration = "
37 << debugOutput.audioFileDuration << "\n";
38 debug_output_module << "score = " << debugOutput.score << "\n";
39 debug_output_module << "tatumRate = "
40 << 60. * debugOutput.tatumQuantization.numDivisions /
41 debugOutput.audioFileDuration
42 << "\n";
43 debug_output_module << "bpm = " << (result.has_value() ? result->bpm : 0.)
44 << "\n";
45 debug_output_module << "lag = " << debugOutput.tatumQuantization.lag << "\n";
46 debug_output_module << "odf_peak_indices = [";
47 std::for_each(
48 debugOutput.odfPeakIndices.begin(), debugOutput.odfPeakIndices.end(),
49 [&](int i) { debug_output_module << i << ","; });
50 debug_output_module << "]\n";
51 PrintPythonVector(debug_output_module, debugOutput.odf, "odf");
52 PrintPythonVector(debug_output_module, debugOutput.rawOdf, "rawOdf");
53 PrintPythonVector(
54 debug_output_module, debugOutput.movingAverage, "movingAverage");
55 PrintPythonVector(
56 debug_output_module, debugOutput.odfAutoCorr, "odfAutoCorr");
57 PrintPythonVector(
58 debug_output_module, debugOutput.odfAutoCorrPeakIndices,
59 "odfAutoCorrPeakIndices");
60
61 std::ofstream stft_log_module {
62 std::string { CMAKE_CURRENT_SOURCE_DIR } +
63 "/TatumQuantizationFitVisualization/stft_log.py"
64 };
65 stft_log_module << "wavFile = \"" << wavFile << "\"\n";
66 stft_log_module << "sampleRate = " << audio.GetSampleRate() << "\n";
67 stft_log_module << "frameRate = " << debugOutput.odfSr << "\n";
68 stft_log_module << "stft = [";
69 std::for_each(
70 debugOutput.postProcessedStft.begin(),
71 debugOutput.postProcessedStft.end(), [&](const auto& row) {
72 stft_log_module << "[";
73 std::for_each(row.begin(), row.end(), [&](float x) {
74 stft_log_module << x << ",";
75 });
76 stft_log_module << "],";
77 });
78 stft_log_module << "]\n";
79}
void PrintPythonVector(std::ofstream &ofs, const std::vector< T > &v, const char *name)
Definition: MirTestUtils.h:133

References audio, MIR::QuantizationFitDebugOutput::audioFileDuration, GetMusicalMeterFromSignal(), MIR::OnsetQuantization::lag, Lenient, MIR::QuantizationFitDebugOutput::movingAverage, MIR::OnsetQuantization::numDivisions, MIR::QuantizationFitDebugOutput::odf, MIR::QuantizationFitDebugOutput::odfAutoCorr, MIR::QuantizationFitDebugOutput::odfAutoCorrPeakIndices, MIR::QuantizationFitDebugOutput::odfPeakIndices, MIR::QuantizationFitDebugOutput::odfSr, MIR::QuantizationFitDebugOutput::postProcessedStft, PrintPythonVector(), MIR::QuantizationFitDebugOutput::rawOdf, runLocally, MIR::QuantizationFitDebugOutput::score, and MIR::QuantizationFitDebugOutput::tatumQuantization.
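PrintPythonVector is declared in MirTestUtils.h (line 133) and is used above to dump the ODF-related vectors as Python list assignments. Its exact body is not reproduced here; an equivalent sketch, consistent with the way odf_peak_indices is written out manually in the listing, might look as follows (illustrative only, not the actual implementation):

#include <fstream>
#include <vector>

// Illustrative equivalent of PrintPythonVector; writes `name = [v0,v1,...]`.
template <typename T>
void PrintPythonVectorSketch(
   std::ofstream& ofs, const std::vector<T>& v, const char* name)
{
   ofs << name << " = [";
   for (const auto& x : v)
      ofs << x << ",";
   ofs << "]\n";
}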

◆ ToString()

auto MIR::ToString ( const std::optional< TimeSignature > &  ts)

Definition at line 139 of file TatumQuantizationFitBenchmarking.cpp.

140{
141 if (ts.has_value())
142 switch (*ts)
143 {
144 case TimeSignature::TwoTwo:
145 return std::string("2/2");
146 case TimeSignature::FourFour:
147
148 return std::string("4/4");
149 case TimeSignature::ThreeFour:
150 return std::string("3/4");
151 case TimeSignature::SixEight:
152 return std::string("6/8");
153 default:
154 return std::string("none");
155 }
156 else
157 return std::string("none");
158}

References FourFour, SixEight, ThreeFour, and TwoTwo.

Referenced by TEST_CASE().

Variable Documentation

◆ loopClassifierSettings

static const std::unordered_map<FalsePositiveTolerance, LoopClassifierSettings> MIR::loopClassifierSettings
Initial value:
{
{ FalsePositiveTolerance::Strict, { .04, 0.8679721717368254 } },
{ FalsePositiveTolerance::Lenient, { .1, 0.7129778875046098 } },
}

Tolerance-dependent thresholds, used internally by GetMusicalMeterFromSignal to decide whether to return a null or a valid MusicalMeter. The values compared against these thresholds are scores that grow as the signal becomes more likely to contain musical content. The thresholds themselves are obtained by running the TatumQuantizationFitBenchmarking test case; more information can be found there.

Definition at line 49 of file MusicInformationRetrieval.h.

Referenced by GetMeterUsingTatumQuantizationFit().
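As a rough sketch of the intended use (the authoritative logic lives in GetMeterUsingTatumQuantizationFit): the analysis score is compared against the threshold for the requested tolerance, and a null MusicalMeter is returned when the score falls short. The member name threshold and the helper below are assumptions for illustration only.

// Hedged sketch; not the actual body of GetMeterUsingTatumQuantizationFit.
// Assumes <optional> and the MIR headers are included, inside namespace MIR,
// and that the second member of LoopClassifierSettings is named `threshold`.
std::optional<MusicalMeter> GateByScoreSketch(
   double score, FalsePositiveTolerance tolerance, MusicalMeter candidate)
{
   if (score < loopClassifierSettings.at(tolerance).threshold)
      return std::nullopt; // likely not music content
   return candidate;
}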

◆ runLocally

static constexpr auto MIR::runLocally = false

Definition at line 29 of file MirTestUtils.h.

Referenced by TEST_CASE().