Well, the first thing is that no, Electron isn't going anywhere. It's a hot mess from both the user-experience and the developer side--what do you mean JS needs a compiler?--but you're not going to beat it for cross-platform ease if you also need to work on the web (you *can* beat it easily if it's just a desktop app). And you can't beat it for handing some frontend dev who's never done desktop a copy and saying "here's Electron, have fun". It makes tons of sense from a business perspective, in other words.
Flutter is actively working on desktop accessibility, though who knows when that will be done. I don't have a single link to one place; it's kind of scattered, but they're forking the Chrome accessibility code as we speak, and they seem to care. If that happens on a compatible timeline, that's probably where I'll go. There's also Xamarin, and the newest version of .NET (not out yet) may make VSCode/CLI development possible, which finally addresses my "what if VS becomes inaccessible again?" objection.
Doing async programming isn't hard at all now, though older audiogames may have gotten stuck in the transition. Any advanced game is going to be async-ish anyway, because you can't block the main loop, and with modern async/await it's actually pleasant; any language that's async-capable has something like it now. You just have to isolate your algorithmic/math simulation piece from your I/O and window-interaction piece, but you have to do that anyway. Libraries like React provide very nice ways to wire up events without dealing with the lowest level--there's a reason modern sighted devs don't learn JS on its own. Learning JS on its own is a special sort of hell, because nothing is abstracted at all.
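To make that split concrete, here's a minimal sketch; simulate() and render() are hypothetical stand-ins for your own simulation and I/O code:

```js
// The simulation advances on a fixed tick and never blocks; I/O is fired off
// and not waited on. sleep() just yields back to the event loop.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function gameLoop(state) {
  while (state.running) {
    simulate(state); // pure math/state updates, synchronous and fast
    render(state);   // push updates to audio/UI; don't block on them
    await sleep(16); // yield to the event loop, roughly 60 ticks a second
  }
}
```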
As for audio, well. You do get 3D audio, though last I checked it's worse than Synthizer's current HRTF, and it's buggy. WebAudio does do what most newbie game devs need and, given the state of the community, that's what 99.99% of audiogames use. But it's not just 3D audio that's missing; it's most things. You can do all sorts of cool music stuff, because it's best in class at "make sure this plays exactly at sample 17, and oscillate the frequency with sample-perfect accuracy". But it can't really do feedback, that is, feeding the output back into the start of the chain with a small delay: cycles in the node graph are only allowed through a DelayNode, and those get clamped to a minimum of one render quantum (128 samples), far too long for the tight loops these effects need. That takes reverb, chorus, echo, flangers, etc. etc. etc. all off the table, kind of. You can find implementations, but they're subpar and inefficient, because they can only use feedforward architectures or say "fuck it, here's a giant impulse response" and literally burn 100x to 1000x the CPU you'd otherwise need.

The solution is WebAssembly or asm.js: literally write your custom effects in C. Some of this may be better now, if only because it's been about a year and maybe people have started writing lots of good effects now that WebAssembly is more stable. But it was only a couple of months back that ShiftBacktick said they had to implement their own terrible HRTF using an incredibly meh algorithm, because even their wasteful implementation was better than the default built-in one, and all of their games were broken on Firefox (at least for me; maybe not for everyone, but this isn't the first time I've hit a "browser x is broken" WebAudio bug). Since I know how to write my own audio code, I don't consider hacking around this sufficient--the only way writing an audiogame is worth it to me personally is if the audio part is actually good, though being around for the height of DirectSound certainly helps make that more important to me than to others.
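If you do go the custom-effects route, the escape hatch is an AudioWorklet, where you own the inner loop and the one-render-quantum restriction doesn't apply. Here's a rough sketch of a feedback comb filter (the building block behind echo/flanger-type effects); the processor name and parameter values are just my own:

```js
// comb-processor.js -- runs inside the AudioWorkletGlobalScope, where
// sampleRate and registerProcessor() are globals.
class CombProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // A 5 ms delay line: far shorter than the 128-sample minimum a
    // DelayNode cycle would force on us.
    this.delaySamples = Math.floor(sampleRate * 0.005);
    this.buffer = new Float32Array(this.delaySamples);
    this.writeIndex = 0;
    this.feedback = 0.7;
  }

  process(inputs, outputs) {
    const input = inputs[0][0];
    const output = outputs[0][0];
    if (!input) return true; // nothing connected yet; keep the node alive
    for (let i = 0; i < output.length; i++) {
      const delayed = this.buffer[this.writeIndex];
      // Feed the delayed signal back into the delay line.
      this.buffer[this.writeIndex] = input[i] + delayed * this.feedback;
      this.writeIndex = (this.writeIndex + 1) % this.delaySamples;
      output[i] = delayed;
    }
    return true;
  }
}

registerProcessor('comb-processor', CombProcessor);
```

Load it with `await ctx.audioWorklet.addModule('comb-processor.js')` and instantiate with `new AudioWorkletNode(ctx, 'comb-processor')`; for anything heavier you'd swap the inner loop for a wasm call, which is what the write-it-in-C approach amounts to.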
But really, it's hard to articulate this exactly without sounding arrogant. This is somewhat a "do as I say, not as I do" moment. JS is probably fine for the average experience level here, but I'm personally aiming for audio World of Warcraft--or at least the pieces to build it; I suspect generating content is where it starts falling apart. And to me, the UI part of that just isn't a problem. By the time I'm at it, I'd literally be willing to write the UI in a different language, because that's just not a big deal at all if you've architected your game loop the way 99% of sighted devs do. Once you've got some message channels and things, it doesn't matter if the other end is a Python UI or whatever else; the only reason the other end can't be on a different computer is network latency. This might sound like I'm being some sort of insane fool, but I'm not: if you also want the phones, you have to do something like this anyway, since the phones don't necessarily even have a keyboard. The same things that let you abstract over keyboard vs mouse vs gamepad vs speech recognition vs gestures on a touchscreen also let you just go "meh, I'll use two languages if I have to". Obviously I'm hoping ultimately not to have to, but it's kind of freeing not to have to care.
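For what it's worth, the abstraction I mean is pretty small. A hedged sketch, where the message shapes and the state.player methods are made up:

```js
// Every frontend--keyboard, gamepad, speech, or a UI process in another
// language on the end of a pipe--reduces its native events to the same
// semantic messages, so the game core never knows where input came from.
const inputQueue = [];

// A keyboard frontend, translating keys into intents.
window.addEventListener('keydown', e => {
  if (e.key === 'ArrowUp') inputQueue.push({ type: 'move', direction: 'forward' });
  if (e.key === ' ') inputQueue.push({ type: 'interact' });
});

// The core drains messages once per tick. Swap the producer for a socket
// from a Python UI process and this function doesn't change at all.
function drainInput(state) {
  for (const msg of inputQueue.splice(0)) {
    switch (msg.type) {
      case 'move': state.player.move(msg.direction); break;
      case 'interact': state.player.interact(); break;
    }
  }
}
```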