Playing Field

Tech Preview

Whisper + audio synchronization:
In this tech preview we use Whisper from OpenAI together with the waveform visualization library wavesurfer.js to transcribe audio into text and render the audio as waveforms. The waveforms can be dragged to experiment with synchronization.
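The transcription step can be sketched in a few lines of Python, assuming the open-source `openai-whisper` package; the model name and helper functions below are illustrative, not taken from this project's code.

```python
def transcribe_file(path, model_name="base"):
    """Transcribe an audio file with Whisper and return its timed segments.
    Requires the openai-whisper package (pip install openai-whisper)."""
    import whisper  # imported here so the rest of the module works without it
    model = whisper.load_model(model_name)
    result = model.transcribe(path)
    # Each segment carries start/end times in seconds plus the text,
    # which is exactly what waveform alignment needs.
    return result["segments"]


def format_segments(segments):
    """Render Whisper-style segments as '[start-end] text' lines."""
    return [
        f"[{s['start']:.2f}-{s['end']:.2f}] {s['text'].strip()}"
        for s in segments
    ]
```

The per-segment timestamps are what make dragging a waveform against the transcript meaningful: each chunk of text is anchored to a time range in the audio.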
This preview is designed to run on a single Linux server with limited memory and storage, so uploads are limited to 20 MB. The code is available at: https://gitlab.origo.io/origosys/whisper-playing-field.
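Enforcing such a cap server-side is a one-liner; here is a minimal sketch (the constant and function names are ours, not from the project's code):

```python
import os

# The 20 MB upload cap mentioned above.
MAX_UPLOAD_BYTES = 20 * 1024 * 1024


def upload_allowed(path):
    """Return True if the file at `path` fits within the upload limit."""
    return os.path.getsize(path) <= MAX_UPLOAD_BYTES
```

In practice you would also want the web server itself to reject oversized request bodies (e.g. Nginx's `client_max_body_size`) so large files never reach the back-end.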
You can run it using Docker; see the Dockerfile for instructions.
You can also simply git-clone the repository to any Apache or Nginx web server; the Dockerfile shows the Apache configuration. If you are into Kubernetes, you can run a pre-built Docker image with this YAML file.
The back-end of this application consists of a single Python file; the front-end of an HTML file and a JavaScript file. No frameworks, no shadow DOMs, no hydration, no object stores, no GitHub Actions, no queueing systems (though one would obviously be needed to do more than fool around); just a bit of plain JavaScript and a Python file.