2021-04-03 03:30:53

Hi Guys

So I was working on APIs for the VidyaDrishti site, and there's kind of a problem.

The only thing I really struggle with in React Native is fetch requests.

So, I wrote the API, and it works when rendered in a browser; you can see it at https://vidyadrishti.herokuapp.com/api/get/articles
However, when I make the fetch request in React Native and run the result through a FlatList, it renders an empty list. It's as though nothing's been fetched.

Here's my code:

In api.js
const getArticlesFromApiAsync = async () => {
    let response = await fetch("https://cors-anywhere.herokuapp.com/https://vidyadrishti.herokuapp.com/api/get/articles")
    let json = await response.json()
    return json // json is an object containing many article objects
}

In articles.js, my main rendering file (note that I'm not including the main import statements here, but I do import api):
export class ArticlesScreen extends React.Component {
    state = {
        data: {}
    }
    componentDidMount() {
        const dataJson = getArticlesFromApiAsync()
        this.setState({data: dataJson})
    }
    render() {
        return (
            <View>
                <FlatList
                    data={this.state.data}
                    renderItem={({item}) => <Text>{item.title}</Text>}
                />
            </View>
        )
    }
}
Now, I know I did not specify a FlatList key; I was just debugging this error at the time and wasn't too sure what I did wrong here.
Any help would be appreciated.
If you do need my API code, please write back and I will put it out there.
Thanks in advance!

Founder and Managing director of Vidyadrishti.
Check us out on twitter at https://twitter.com/vidya_drishti
Check us out on instagram at https://instagram.com/vd.vidyadrishti
Check my YouTube channel out at https://youtube.com/c/techwithphoenix

2021-04-03 03:44:36

I'm pretty sure you need to await any call to an async function, or you just get the promise back instead of the result. Try const dataJson = await getArticlesFromApiAsync() in your componentDidMount instead (you'll need to mark componentDidMount as async for that to work).
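
Something like this, roughly (assuming the function really is called getArticlesFromApiAsync, as in your articles.js):

async componentDidMount() {
    const dataJson = await getArticlesFromApiAsync();
    this.setState({data: dataJson});
}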

Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.

2021-04-03 04:25:52

Yeah, so firstly I recommend learning React hooks, which are much nicer and much more powerful than classes, but that's beside the point.

You have to do:

componentDidMount() {
    myAsyncFunctionHere().then(res => this.setState(res));
}

Yours is setting the promise itself in the state, not the result of the promise.

And then make sure to set the initial state to something that lets the component render.  A more composable pattern is:

componentDidMount() {
    let inner = async () => {
        // all your async code here, including try/catch, whatever else.
        // can send errors out via setState, and so on.
    };
    inner();
}

The last call technically creates a promise, but JS promises execute regardless of whether you wait on them, so that's fine, and you basically get a little background task that can do whatever just by throwing it inside inner.  If you use hooks instead this composes quite nicely, and you can capture local variables and do a few other things that also let you get some work cancellation going for more complex tasks.  In your case a custom hook can easily fetch whatever data you need, hiding the implementation behind it, and then you could just have:

function MyComponent() {
    let data = useMyDataFetcher();
    return <... something JSX with data ...>;
}

And then use multiple custom hooks to fetch data in parallel.  Hooks aren't scary: they're generally shorter than classes, e.g. the entire custom hook for what you need here is:

function useMyDataFetcher() {
    let [state, setState] = useState({data: []});
    useEffect(() => {
        myDataFetcher().then(setState);
    }, []);
    return state;
}

And obviously the pattern at the beginning of this post applies in the useEffect, in which you can use an async closure to easily handle errors etc.
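
For example, roughly (myDataFetcher here stands in for your real fetch function, and the state shape is just one possibility):

// assumes: import {useState, useEffect} from "react";
function useMyDataFetcher() {
    let [state, setState] = useState({data: [], error: null});
    useEffect(() => {
        let inner = async () => {
            try {
                let data = await myDataFetcher();
                setState({data: data, error: null});
            } catch (error) {
                setState({data: [], error: error});
            }
        };
        inner();
    }, []);
    return state;
}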

Sorry if this is a bit off.  I haven't written React in a long time, so maybe getting little bits of it wrong, but this is almost certainly what you want unless you're going to explore e.g. the redux ecosystem.  Obviously plug your actual functions into my pseudocode etc as well.

Note that useEffect in particular avoids all sorts of bugs that you'd have to jump through a ton of hoops to avoid with the old class-based components: things like cancelling a setInterval being harder than it should be, accidentally running your fetching logic on every render, or not re-running it when it should re-run based on state changes.
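
As a tiny made-up illustration of those last two points, nothing to do with your app (a hypothetical useTicker hook):

// assumes: import {useState, useEffect} from "react";
function useTicker(intervalMs) {
    let [count, setCount] = useState(0);
    useEffect(() => {
        let id = setInterval(() => setCount(c => c + 1), intervalMs);
        // The cleanup runs on unmount and whenever intervalMs changes,
        // so the interval is never leaked and never duplicated.
        return () => clearInterval(id);
    }, [intervalMs]);
    return count;
}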

My Blog
Twitter: @ajhicks1992

2021-04-03 04:54:53

Thanks 2 and 3!
@3, hooks actually seem like the more efficient way to go.
I'll try your fixes and then report back with the result.

Founder and Managing director of Vidyadrishti.
Check us out on twitter at https://twitter.com/vidya_drishti
Check us out on instagram at https://instagram.com/vd.vidyadrishti
Check my YouTube channel out at https://youtube.com/c/techwithphoenix

2021-04-03 05:10:50

Yeah.  Basically, old class-based components flat out don't let you split logic, not even by subclassing.  It finally bit Facebook hard enough in some way or other that they fixed it.  Pretty sure they said somewhere that they're incrementally moving their whole codebase over internally, but it's Facebook, so who knows if that's actually what they did or not.  But as with these things the internet is full of older resources.  One of these days I should do something with my audiogame web game engine prototype thing that went insane with React and Electron.  It wasn't exactly a failure, but it wasn't exactly a success either, so I've never cleaned it up and published it.  But being able to throw 5-line functions that can use React state and things inside them, and then pull them together as though I'm writing entirely normal JS code and then UI comes out, is really amazing, and you just can't do that with class-based components at all.

My Blog
Twitter: @ajhicks1992

2021-04-03 05:16:13

Oooooh! @5, that thing would be badass!

I've always been more of a Python guy, favoring its syntax over JS, but then again Kivy isn't accessible, so that's why I learned React and React Native.

Founder and Managing director of Vidyadrishti.
Check us out on twitter at https://twitter.com/vidya_drishti
Check us out on instagram at https://instagram.com/vd.vidyadrishti
Check my YouTube channel out at https://youtube.com/c/techwithphoenix

2021-04-03 17:20:07

Well yeah, but also no.  It had a lot of cool ideas in it, including a Kubernetes-inspired audio reconciliation thing, but it also proved that WebAudio has a long, long way to go, and also that graphical level editors aren't really going to be that efficient for anything super complex no matter how accessible I made them.  The React parts were really useful, in other words, but the rest...not so much.  It was more a sandbox in which I tried a bunch of things sighted game people do that everyone around here ignores, to see what stuck and what didn't.

But at the time I had just come out of a period in my life where I was working on React stuff, and hooks had just happened, so I used it in anger.  I don't regret that part of it--it worked great, once I figured out some of the stuff around advanced keyboard handling (not so much the API, but how you get it to play nice with React's reactive style of coding when you need it to act instead).

My Blog
Twitter: @ajhicks1992

2021-04-04 06:17:37

Hmm yeah true.

Honestly, JS can be a very viable option for games if people start using it.
It's also a bit faster than Python.

Founder and Managing director of Vidyadrishti.
Check us out on twitter at https://twitter.com/vidya_drishti
Check us out on instagram at https://instagram.com/vd.vidyadrishti
Check my YouTube channel out at https://youtube.com/c/techwithphoenix

2021-04-04 06:42:22

Yeah, but the problem is that it quickly becomes "now you're using Electron", which starts at something like 100 MB of RAM usage as an absolute minimum.  I left it for two primary reasons, the first being that I got fed up with WebAudio and was literally at the point of trying to write my own audio code in Webasm, and then went nope, not doing this.  That probably doesn't matter to most other people--at least, not until Synthizer is a bit further along.  But WebAudio makes it really hard to do reverb that's both high quality and efficient, for example.

But the second reason is that the packaging/development cycles are kind of terrible.  If you use Electron you've got at least a one-second gap between running yarn start or npm run or whatever and the app actually running, and because the entire thing is stateful you can't hot reload it (which is kind of critical--just loading the browser is painful in itself, never mind the webpack compile cycle, or ts compilation, or...).  You also use more RAM just by opening the thing than any Python game would use at peak.  If you say "never mind, I'll just use the web" you can only make small games, because asking half the world to download 100 MB of audio assets on every page load isn't really feasible when we're talking about typical blind people on typical blind-person computers and internet.  It's like Node in general: you get none of the advantages of Python (fast iteration cycles) and none of the advantages of a native language (low resource usage, threading, all that good stuff).

I was interested because the one thing the web has going for it is that accessible UIs that aren't "we hacked into the screen reader and made it talk" are easy there, to some extent.  The native-type controls I produced for it don't generalize and aren't useful because good luck styling them, but I did prove that the reason we don't have web apps that are as good as desktop apps is just that no one puts in the effort to get the ARIA/focus management right.  The goal was to do a Unity-style level editing thing where it binds to your properties and you get dialogs, but once it was proven to my satisfaction that it's something like 10 times faster to just edit a YAML file, the advantages of it were gone, plus I was running up against the "good reverb? You want a good reverb? How about no" problem of WebAudio at the same time.  And that was that: Node/Electron/JS audiogame development, may you rest in peace.  If you're doing something small (as in asset size) it's probably all right, but the pieces don't mesh well together for complex projects.

My Blog
Twitter: @ajhicks1992

2021-04-04 13:01:24

Hmm true.
I think even React Native is going to die soon, just given how many other libs are out there.
Unfortunately, many cross-platform libs are not accessible, except for Flutter, and I've still not figured out how to install its SDK lol.

Founder and Managing director of Vidyadrishti.
Check us out on twitter at https://twitter.com/vidya_drishti
Check us out on instagram at https://instagram.com/vd.vidyadrishti
Check my YouTube channel out at https://youtube.com/c/techwithphoenix

2021-04-04 16:48:27

Despite how hated Electron is in the dev community, it is not going anywhere. On sites like Hacker News, any time there is an Electron-related story, the comment section is just everyone bashing how Electron takes up more resources and how great it was when everything was native.

However, there is a good reason why it has become as widely used as it has, namely cross-platform development. Before, you'd need to do almost twice as much work for your app to work on two platforms, but stuff like Electron, React Native, and Node.js has made this very easy.

Yes, there is a big upfront cost in resource usage. Yes, native apps are much more efficient. But that still doesn't change the fact that it makes supporting multiple platforms much, much easier.

That's not to say it is a good platform for audiogames. I trust Camlorn has looked into it thoroughly, so all of that is surely valid. I just wanted to point out that these technologies are only going to get more popular, and are certainly not going to die anytime soon.

But, in regards to audiogames in Electron: what about games that don't require 3D audio? I'm guessing there wouldn't be too many obstacles to developing those. Or am I missing something?

2021-04-04 16:59:09

I can't speak to all of the advantages, or disadvantages for that matter, that have been listed regarding Electron, but when I tried to get going with it I didn't make much headway. As previously said, it is rather complicated to get started with. Furthermore, there isn't a lot of guidance for when things go wrong, leaving you to Google for solutions, which works about 70% of the time. That is OK, I'm used to doing research at this point, but the other 30% of the time is very frustrating. Also, we don't have a lot of resources, well, good quality material I should say, that shows how to use all of the components together. Again, that's typically not an issue, I don't mind tinkering and figuring stuff out, but combine it with the rather unhelpful errors produced by JavaScript and its ecosystem and you can see why it can make for a difficult language to work with. I'm not complaining, people love JS and all that comes with it, I'm just recounting my experiences.

It is quite a pity, though; as previously mentioned, the language does have the upside of being compatible with a lot of platforms out of the box. Then again, looking at some game source code, Beatstar, I'm looking at you, it does require good knowledge of asynchronous operations, something which the aforementioned game does not quite get right: you can still break functions and leave yourself with an empty window which does not respond to anything you do.

2021-04-04 18:08:12

Well, so the first thing is that no, Electron isn't going anywhere.  It's a hot mess both from the user experience side and the developer side--what do you mean, JS needs a compiler?  But you're not going to beat it for cross-platform ease if you also need to work on the web (you *can* beat it if it's just a desktop app, easily).  And you can't beat it for taking some frontend dev who's never done desktop and going "here's Electron, have fun".  It makes tons of sense from a business perspective, in other words.

Flutter is actively working on desktop accessibility, though who knows when that will be done.  I don't have a single link to one place; it's kind of scattered.  But they're forking the Chrome code as we speak.  They seem to care.  If that happens on a compatible timeline, that's probably where I'll go.  There's also Xamarin, and the newest version of .NET (not out yet) may make VSCode/CLI development possible, which finally addresses my "what if VS becomes inaccessible again?" objections.

Doing async programming isn't hard at all now, though older audiogames may have gotten stuck in the transition.  Any advanced game is going to be async-ish anyway because you can't block the main loop.  With modern async/await stuff it's actually even pleasant.  Any language which is async-capable has something like it now.  You just have to isolate your algorithmic/math simulation piece from your I/O window interaction piece, but you have to do that anyway.  Libraries like React can provide very nice ways to wire up events and things without having to deal with doing it at the lowest level--there's a reason modern sighted people don't learn JS on its own.  Learning JS on its own is a special sort of hell because nothing is abstracted at all.
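
To give a flavor of what I mean, here's a purely hypothetical sketch (none of this is from a real game): a long-lived task that waits without ever blocking the main loop.

// A background task written with plain async/await.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function patrol(enemy) {
    while (enemy.alive) {
        enemy.moveToNextWaypoint(); // made-up game logic
        await sleep(1000);          // yields to the event loop instead of blocking it
    }
}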

As for audio, well.  You do get 3D audio, though last I checked it's worse than Synthizer's current HRTF, and it's buggy.  But WebAudio does do what most newbie game devs need and, given the state of the community, that's what 99.99% of audiogames do.  But it's not just 3D audio that's missing.  It's most things.  You can do all sorts of cool music stuff because it's best in class at "make sure this plays exactly at sample 17, and oscillate the frequency with sample-perfect accuracy".  But it can't do feedback, that is, feeding what comes out back into the beginning with a small delay.  This takes reverb, chorus, echo, flangers, etc. etc. etc. all off the table, kind of.  You can find implementations, but they're subpar and inefficient because they can only use feedforward architectures, or they say "fuck it, here's a giant impulse response" and literally use 100x to 1000x the CPU you could otherwise get away with.  The solution to this is to use Webasm or asm.js and literally write your custom effects in C.  Some of this may be better now, if only because it's been about a year and maybe people have started writing lots of good effects now that Webasm is more stable.  But it was only a couple of months back that ShiftBacktick said they had to implement their own terrible HRTF using an incredibly meh algorithm, because even their wasteful implementation was better than the default built-in one, and all of their games were broken on Firefox (at least for me.  Maybe not for everyone.  But this isn't the first time I've encountered a "browser x is broken" WebAudio bug).  Being as I know how to write my own audio code, I don't consider hacking around this sufficient--the only way writing an audiogame is worth it to me personally is if the audio part is actually good, though being around for the height of DirectSound certainly helps make that more important to me than to others.
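
For what it's worth, here's a minimal sketch of the kind of sample-accurate scheduling WebAudio is genuinely good at (the timing numbers are made up):

// Schedule a beep exactly half a second from now, with a precise frequency sweep.
// Note: real browsers require a user gesture before the context will produce sound.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.frequency.setValueAtTime(440, ctx.currentTime + 0.5);
osc.frequency.linearRampToValueAtTime(880, ctx.currentTime + 1.5);
osc.connect(ctx.destination);
osc.start(ctx.currentTime + 0.5); // starts on schedule regardless of main-thread jitter
osc.stop(ctx.currentTime + 1.5);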

But really, it's hard to articulate this exactly without sounding arrogant.  This is somewhat a "do as I say, not as I do" moment.  JS is probably fine for the average experience level here, but I'm personally aiming for audio World of Warcraft.  Or at least the pieces to build it; I suspect that generating content is going to be where it starts falling apart.  And, to me, the UI part of that just isn't really a problem.  By the time I'm at it, I'd literally be willing to write the UI in a different language, because that's just not a big deal at all if you've architected your game loop the way 99% of sighted people do.  Once you've got some message channels and things, it's not a big deal if the other end is in a Python UI thing or whatever else.  The only reason the other end can't be on a different computer is network latency.  This might sound like I'm being some sort of insane fool, but I'm not: if you also want the phones, you have to do something like this, since the phones don't necessarily even have a keyboard--the same things that let you abstract over keyboard vs mouse vs gamepad vs speech recognition vs gestures on a touchpad also let you just go "meh, I'll use two languages if I have to".  Obviously I'm hoping ultimately not to have to, but nonetheless it's kind of freeing not to have to care.

My Blog
Twitter: @ajhicks1992