While working on a new video with solutions to the previous one, I found that ChatGPT's new UI struggles even more with concurrent updates: entries lose state and stick around for too long (see video).
If this were a LiveView app, we would be getting so much flak. 😅
---
I believe part of the problem here is having separate mutate and fetch requests on every deletion. The first fetch is cancelled when the second one arrives, causing items to stick around longer than they should.
Many said yesterday that you could do the mutation and fetch as a single request, but that leads to other problems, such as zombie entries.
For example, imagine you delete link1 and link2 within a brief period of time. There is no guarantee the deletion order in the database will match the order in which the client receives the responses, so you may end up with this:
1. (client) request to delete link1 sent
2. (client) request to delete link2 sent
3. (server) deletes link1 and loads a new list (includes link2)
4. (server) deletes link2 and loads a new list (no link1 or link2)
5. (client) receives link2 response
6. (client) receives link1 response
So if you choose to use the latest response (link1's), you bring link2 back to life. If you say you will use the response from the last request instead, events 3-4 can be swapped, and now you bring link1 back to life.
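The race above can be sketched in a few lines. This is a toy simulation (the function and list names are mine, not from any real app), where each server response carries a full snapshot of the remaining list:

```python
# Hypothetical sketch of the race: a naive client that always adopts the
# most recently *received* snapshot, regardless of which request it answers.

def apply_latest_response(responses):
    """Naive client: keep whichever snapshot arrived last."""
    state = None
    for snapshot in responses:
        state = snapshot
    return state

# Server processing order (events 3-4): link1 deleted first, then link2.
snapshot_after_link1 = ["link2", "link3"]  # link2 still present here
snapshot_after_link2 = ["link3"]           # both deletions applied

# The network reorders the responses (events 5-6): the *older* snapshot,
# from the link1 deletion, is received last.
received = [snapshot_after_link2, snapshot_after_link1]

print(apply_latest_response(received))  # ['link2', 'link3'] -> link2 is a zombie
```

The bug is not in either individual response; it is that "latest received" and "latest processed" are different orderings.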
Another way to solve this is by not allowing concurrent requests at all, but that can drastically affect the user experience in other ways.
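A minimal sketch of that approach, assuming deletions are queued so only one request is ever in flight (all names here are hypothetical; the "server" is simulated in-process):

```python
import asyncio

# Hypothetical sketch: serialize mutations so responses can never arrive
# out of order. The cost is latency: deleting N items takes N round trips.

async def delete_on_server(link, state):
    state.discard(link)      # simulated server-side deletion
    await asyncio.sleep(0)   # simulated network round trip
    return sorted(state)     # snapshot returned to the client

async def serialized_deletes(links, state):
    snapshot = sorted(state)
    for link in links:       # wait for each response before sending the next
        snapshot = await delete_on_server(link, state)
    return snapshot

state = {"link1", "link2", "link3"}
print(asyncio.run(serialized_deletes(["link1", "link2"], state)))  # ['link3']
```

Because requests are strictly ordered, the last snapshot the client applies is always the last one the server produced, so zombies cannot appear; the trade-off is that every deletion blocks behind the previous one.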
Next week I should publish a video explaining how LiveView tackles this. Stay tuned!