Wekinator

This is a Weka bird. Except it's been Wekinated, so it has scary red eyes.


The Wekinator: Software for using machine learning to build real-time interactive systems

by Rebecca Fiebrink, Dan Trueman, and Perry Cook

In short, you haven’t heard about Wekinator because I was nose-deep in dissertation work until quite recently. The Wekinator is software I built during my recent PhD at Princeton, and it’s meant to enable composers and musicians to more easily use machine learning to create interactive systems. (Please see the website for more info: http://wekinator.cs.princeton.edu/)

The software is currently in a stable state, and it’s usable by anyone who wants to control ChucK, Processing, Unity, Ableton, or pretty much anything else that can be controlled by a stream of OSC messages. These audio/video/etc. systems can be controlled by gesture (Kinect, Arduino, USB game controllers, webcam, …), audio (e.g. using ChucK audio feature extractors, Max/MSP analyzer~, etc.), or anything else that can extract information about human actions in real-time and pass it to Wekinator via OSC.
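Since Wekinator's control path is just OSC over UDP, any language that can emit OSC packets can talk to it. As a rough sketch, the Python code below hand-builds a minimal OSC message using only the standard library; the address pattern `/inputs` and port `6448` are illustrative placeholders here, so check the Wekinator documentation for the actual address and port it listens on.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    # Build a simple OSC message whose arguments are all float32
    tags = "," + "f" * len(floats)          # type tag string, e.g. ",ff"
    return (osc_pad(address.encode())
            + osc_pad(tags.encode())
            + b"".join(struct.pack(">f", f) for f in floats))

# Hypothetical example: send two control values to a local Wekinator.
# Address and port are assumptions, not confirmed Wekinator defaults.
packet = osc_message("/inputs", 0.25, 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6448))
```

The same packet-building routine works for sending Wekinator's output on to ChucK, Processing, or anything else with an OSC listener.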

The software will continue to evolve. There are some architectural changes that must happen soon, but if anyone is looking to help with development, there are a few areas where I see room for collaboration:
– UI (to keep this cross-platform, my plan is to stay in Java, using JavaFX + Processing; that said, the UI needs an overhaul)
– Documentation (READMEs, FAQ, and in-app help)
– Examples (e.g., putting together a nice repository of code examples of ChucK, Max/MSP, Ableton, Processing, etc. being driven by Wekinator)
– Determining what is necessary for tighter integration into Max/MSP (e.g. using Jamoma framework), with working examples for our Max-loving friends

Additionally, I’m very interested in hearing suggestions, bug reports, and feature requests from people who are actively using Wekinator. From the beginning, the software has been driven by the ideas of people actually using it to make music, and it’s become a much better project as a result.

Lastly, if this project interests you at all, please consider joining the Wekinator mailing list at http://groups.google.com/group/wekinator-users?pli=1. That’s the best forum to share your ideas, bugs, questions, etc.

Best,
Rebecca

Click Tracker

Click Tracker is a program designed for composers, conductors, and
instrumentalists working with modern music. It lets you prepare a click
track of any score, no matter how complex.

This software runs under the open-source program Pure Data. It can be
used by conductors in concert, by musicians for practice, by composers
while composing, or simply to produce and record a click track to be
played back later.

To prepare a score, just enter the values in a plain text file (.txt), following the syntax explained in the online tutorial (also included with the patch).

– New website on Google Code, with online manual and installation instructions:
http://code.google.com/p/clicktracker/

– New Facebook page:
https://www.facebook.com/pages/Click-Tracker/157542494299048

Jamoma 0.5.2 released


http://www.jamoma.org

The Jamoma team is pleased to announce the release of the Jamoma 0.5.2 implementation for Max/MSP. This second revision of Jamoma 0.5 has been in the works for the past six months and brings many bug fixes and new features resulting from the exchanges and efforts of the Jamoma community. Please visit the download section at http://jamoma.org to get a copy. Enjoy the new version!

Features include:

   * A large, peer-reviewed library of modules for audio and video processing, sensor integration, cue management, mapping, and exchange of data with other environments
   * An extensive set of abstractions that facilitate development and documentation of Max/MSP projects
   * Specialized sets of modules for work on spatial sound rendering, including support for advanced spatialization techniques such as Ambisonics, DBAP, VBAP, and ViMiC
   * Modules for work on music-related movement analysis
   * Powerful underlying control structures that handle communication across modules
   * A strong emphasis on interoperability
   * Native OSC support, making it easy to access and manipulate processes via external devices and interfaces
   * Comprehensive documentation through maxhelp files, reference pages, and a growing number of online tutorials
   * Easy extensibility and customization

Jamoma is an open-source project for the development of audio/video applications, plugins and Max/MSP-like environments. It offers many C++ frameworks for structured programming and is based on modular principles that allow the reuse of functionalities where all parameters remain customizable to specific needs.

Jamoma has been in development for more than five years and is used for teaching and research in both science and the arts. It has provided a performance framework for composition, audio/visual performances, theater, and gallery installations. It has also been used for scientific research in the fields of psychoacoustics, music perception and cognition, machine learning, human-computer interaction, and medical research (see also http://redmine.jamoma.org/projects/jamoma/wiki/Research_Projects_Using_Jamoma).

Jamoma is distributed under a BSD license, and the sources can be freely downloaded at http://github.com/jamoma. Development is currently supported by BEK – Bergen Center for Electronic Arts, 74 Objects, Electrotap, GMEA – Centre National de Creation Musicale d’Albi-Tarn, and the University of Oslo. Further details can be found at http://www.jamoma.org/support.html.