[Feature Request] Accessibility on eMotion LV1 Classic: A proposal for blind users

Hi everyone and the Waves team,

My name is Luis Daniel Pérez. I am a blind audio engineer and a huge fan of your products.

I recently sent a detailed email to the support team regarding this topic, and I sincerely hope to receive a reply through that channel. However, I decided to post this here in the forum as well because I want the rest of the community to know that there are blind people who truly love the world of audio, recording, and mixing, and we want to be a part of this.

The situation with the eMotion LV1 Classic
This console looks amazing, and as a user, I am very excited about the possibility of using it. However, since the interface is based almost 100% on a touchscreen, it is currently impossible for me to use it because I have no feedback on what I am touching.

A technical suggestion: the Windows environment and NVDA
I understand that the LV1 runs on an optimized version of Windows. This brings a technical possibility to mind: if the system allows for the execution of .exe files, perhaps it would be viable to integrate, or at least allow the execution of, screen readers like NVDA (which is open source and free).

I know there might be technical challenges, but if the interface could be adapted to “speak” to these types of readers, it would be a giant leap forward. The goal is that when touching the screen, a synthetic voice tells us what control is under our finger before we activate it (similar to how accessibility works on mobile phones today).
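To make the idea concrete: NVDA ships a small "controller client" DLL that any application can call to speak text. Below is a minimal sketch (Windows only, assuming the DLL has been placed where the script can find it; the announcement text is just an example, not anything the LV1 produces today) of how a touch handler could hand a description to NVDA:

```python
import ctypes

# The controller client DLL ships with NVDA's developer downloads; it must be
# on the DLL search path (or sit next to this script) for the load to succeed.
nvda = ctypes.WinDLL("nvdaControllerClient64.dll")

def announce(text: str) -> None:
    """Interrupt any current speech and speak a control description via NVDA."""
    if nvda.nvdaController_testIfRunning() != 0:
        return  # NVDA is not running; fail silently
    nvda.nvdaController_cancelSpeech()
    nvda.nvdaController_speakText(text)

# Example of what an "explore by touch" handler might say as the finger
# crosses a fader, before the control is activated.
announce("Channel 12 fader, minus 5.0 dB")
```

The same call could fire continuously as the finger moves across the screen, so nothing is activated until the user confirms.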

The goal: 100% usability
For a large-format console like the LV1 Classic to be truly professional for us, accessibility cannot be partial. We need access to:

  • Faders, encoders, and buttons.

  • Internal plugin parameters.

  • And crucially: Configuration windows, patching, routing, sends, and auxes.

A positive reference
Many of us use Reaper because its openness allowed for the creation of total accessibility tools. If Waves explores a similar path, it would make many engineers who currently lack an accessible option for this level of console very happy.

My total commitment
I want to let you know that I am entirely at your disposal if you need to test anything. English is not my native language, but with today’s translation tools, I will make every effort to communicate with you effectively, because it is vital for me that this console becomes accessible.

I hope you can take this suggestion into account. There are many of us who want to use this console; we just need you to open the door.

Thank you for reading.



This would be a very cool thing to see! Though I’ll caution: the LV1 interface has clearly been created from scratch rather than from pre-built UI components, so if the accessibility information wasn’t designed in from the start, retrofitting it could be a massive project (I’ve faced similar projects myself, and in almost every case we ended up replacing the entire GUI instead… I doubt that’s on the table here).

Plugins are likely to be in a better place, I’d expect. They’re already parametrised (for automation), which means you can get to all the controls without needing the fancy GUIs.

That said, having something like NVDA preinstalled in the LV1 Classic OS and routable to a cue channel would be pretty nice.

(For anyone thinking this would be completely infeasible, my understanding is that most users of this technology eventually learn to listen to narration at 4-6x speed or faster! At that rate, moving a touch across the screen to find what you want starts to make sense, as long as you don’t route it through main LR.)


Wow, thank you so much for this reply, Steve! Knowing your technical background, your validation means a lot.

Regarding the GUI vs. Automation Data: I understand that the GUI is likely custom-built and lacks standard OS hooks. While looking at the data stream used for hardware controllers (like the FIT Controller) gives me hope for things like Faders, Mutes, and Plugin Parameters, my biggest concern—and the real barrier—lies in the “deep” system operations.

For a blind engineer to be truly independent, we need access to elements that typically don’t map to external fader banks, such as:

  • The Patching Grid and Routing: This is usually the hardest part to access in digital consoles, yet it’s the most vital for setup.

  • Floating Windows & Pop-ups: Configuration dialogs, scene management, and system alerts.

  • Setup Tabs: Sample rate settings, server management, and I/O config.

If the accessibility layer only covers mixing parameters (faders/plugins) but leaves out the Setup/Patching, the console remains unusable for a standalone engineer. We need that “Explore by Touch” feedback to cover the entirety of the screen elements, regardless of whether they are automatable parameters or static system menus.

The “Cue” Routing is Vital: Your point about routing NVDA to the Cue/Solo bus is absolutely critical (and brilliant). In a live sound scenario, the screen reader’s voice must be isolated to the engineer’s headphones (IEMs) and never bleed into the Main PA. This implementation would be non-negotiable for professional use.

Speed and Workflow: And yes! You are totally right about the speed. Experienced blind users listen to TTS at incredibly high speeds (it feels more like scanning data than listening to a conversation). “Explore by touch” on the screen combined with immediate auditory feedback in the Cue bus would allow for a mixing speed comparable to—or even faster than—visual mixing in some tasks.

Thanks again for chiming in. Fingers crossed that the dev team sees this potential!

That is a very valid point regarding the automation parameters.

Since the internal logic already has to expose every parameter ID for the automation engine and scene recall, the “map” is technically there. It is just a matter of building the bridge to the screen reader. Hopefully, they can prioritize this for the next major OS update.

Exactly! That’s the key point. Since the parameter IDs are already exposed for the Automation Engine and Scene Recall, the heavy lifting on the backend is already done. We don’t need a DSP rewrite.

We just need that existing ‘Map’ to be bridged to the Windows UI Automation (UIA) API. If the system can already tell a motorized fader via MCU protocol to move to ‘-5dB’, it can definitely tell a Screen Reader the same information. It’s just a matter of exposing that text feedback.
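To illustrate what I mean by "exposing that text feedback" (all names here are hypothetical, I obviously have no knowledge of the actual LV1 codebase), the same parameter snapshot that drives a motorized fader or a scene recall could be formatted into the name, role, and value a screen reader announces:

```python
# Purely illustrative sketch of the "bridge" idea: the data the automation
# engine already holds gets rendered as accessible text. All class and field
# names are made up for the example.
from dataclasses import dataclass

@dataclass
class ParamSnapshot:
    param_id: str      # e.g. the ID the automation engine already uses
    label: str         # human-readable name shown on screen
    role: str          # "slider", "button", "combo box", ...
    value_text: str    # already formatted for display ("-5.0 dB", "On", "96 kHz")

def to_announcement(p: ParamSnapshot) -> str:
    """Build the text a screen reader (NVDA, Narrator, VoiceOver) would speak."""
    return f"{p.label}, {p.role}, {p.value_text}"

# If the engine can tell a motorized fader to sit at -5 dB, it can hand the
# same information to the accessibility layer:
fader = ParamSnapshot("ch03.fader", "Channel 3 fader", "slider", "-5.0 dB")
print(to_announcement(fader))   # -> "Channel 3 fader, slider, -5.0 dB"
```

The same descriptor shape would have to cover the setup and patching elements I raise below, not only the automatable parameters.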

However, I have a concern regarding the scope of this accessibility:

My goal is 100% accessibility, not just mixing capabilities. While Automation IDs cover faders and plugins, what about the System Layer?

  • Routing & Patching: Often these are visual matrices. Are these exposed in the code in a way that can be navigated linearly?

  • System Configuration: Sample rate, buffer size, server assignment.

A technical question regarding the OS: I understand the LV1 Classic hardware runs on a streamlined version of Windows (Windows IoT/Embedded). Hypothetically, if I were to access the Admin Panel and run a Portable version of NVDA directly on the console: Would the LV1 interface expose any controls to the screen reader currently? Or is the GUI completely “painted” (using a proprietary graphics framework) without any hooks to the standard Windows Accessibility Tree?

I’m ready to beta test this ‘bridge’ whenever you decide to build it. The blind pro-audio community is waiting for a standard, and LV1 is perfectly positioned to be it.

I’d assume the software is identical to the PC version, or probably even to the Session Editor, so you could test this easily without having to modify your Classic.

I would have to double check, but I’m pretty sure that version of Windows still includes accessibility features (it’s very difficult to legally release products without them these days). You can probably enable Narrator easily to check, but I still think testing on a PC would be an easier starting point.
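If you want something more scriptable than poking around with Narrator, a rough sketch along these lines (using the third-party pywinauto package; the window title pattern is a guess and may need adjusting) will dump whatever the UI Automation tree currently exposes on a PC:

```python
# Quick check of what the LV1 / Session Editor exposes through UI Automation.
# Requires: pip install pywinauto. The title regex below is an assumption.
from pywinauto import Desktop

desktop = Desktop(backend="uia")

# List the titles of all top-level windows so you can find the right one.
for w in desktop.windows():
    print(repr(w.window_text()))

# Then try to dump the control tree of the mixer window. If the GUI is fully
# "painted" with no accessibility hooks, this will show little or nothing
# beyond the bare window frame.
mixer = desktop.window(title_re=".*eMotion.*")   # assumed title pattern
mixer.print_control_identifiers(depth=3)
```

An empty (or nearly empty) dump would answer your question about hooks pretty definitively.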


Quick update: I tested the Session Editor on macOS and unfortunately I’m getting zero usable screen reader feedback. Nothing is exposed in a way that can be navigated or announced — no controls, no values, no roles.

And to be clear, my goal isn’t “it reads some labels.” I need full operational accessibility, end-to-end:

  • When I press a physical button on the control surface, I must get immediate feedback telling me which button it is, what it just did, and what context/window I’m now in.

  • Same for on-screen interaction: I need to know what I’m touching/activating — button vs slider vs tab vs text field, on/off state, current value (e.g., dB, pan), and whether a control is focused/selected.

  • Also critical: pop-ups, dialogs, warnings, and system messages must be announced. Right now I’m not getting any of that — no text boxes, no floating windows, no dialog content, nothing.

At the moment, the only thing I can reliably access is the user manual. But reading a manual and actually operating the console in real-world sessions are two completely different things — there’s still a huge gap between “documentation” and “independent professional use.”

If the UI is currently a “painted” interface without accessibility hooks, then it confirms what we suspected: accessibility would require an explicit bridge layer that publishes the existing internal “map” (automation IDs / scene recall parameters + system functions) to the OS accessibility APIs, so screen readers can announce names/roles/values/states/actions across both the mixing layer and the setup/patching/system layer.
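To sketch what "publishes" could mean in practice (again, every name here is hypothetical and just illustrates the shape of the thing): the bridge would push change events (button presses, value changes, dialogs opening) to whatever is listening, whether that is an OS accessibility provider or a speech client, rather than only answering queries about what is currently on screen:

```python
# Hypothetical sketch of the "publish" side of such a bridge: UI events are
# pushed to subscribers, e.g. a speech client or an accessibility provider.
# No real LV1 APIs are used here.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UIEvent:
    kind: str      # "value_changed", "button_pressed", "dialog_opened", ...
    name: str      # "Channel 3 fader", "Mute 12", "Clock source warning"
    detail: str    # "-5.0 dB", "muted", "External clock lost"

class AccessibilityBridge:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[UIEvent], None]] = []

    def subscribe(self, callback: Callable[[UIEvent], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, event: UIEvent) -> None:
        for cb in self._subscribers:
            cb(event)

bridge = AccessibilityBridge()
# A speech client (like the NVDA controller call shown earlier in the thread)
# would subscribe here; for the sketch we just print what would be spoken.
bridge.subscribe(lambda e: print(f"{e.name}: {e.detail}"))

# Events the console UI would emit as the engineer works:
bridge.publish(UIEvent("button_pressed", "Mute 12", "muted"))
bridge.publish(UIEvent("dialog_opened", "Clock source warning", "External clock lost"))
```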

