Just trying to help us better understand the scope here. Any change in DQL also affects GraphQL, so I try to think from the higher level down to the lower level.
Right, just showing that even the proposed solution of doing some kind of pagination prior to cascade and then applying pagination again after will not lead to a perfect solution, and may sometimes be a worse one depending on the effects of
a query of a billion… or a billion smaller queries
I don’t know what a billion smaller queries might cause, but possibly a scenario worse than the first, idk?
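To make that concrete, here is a rough DQL sketch of what that two-stage approach might look like; the `User` type and `plan` predicate are just placeholders from my simple illustration:

```
{
  # stage 1: paginate BEFORE the cascade to cap the work, a raw batch of candidates
  var(func: type(User), first: 1000) {
    batch as uid
  }

  # stage 2: apply @cascade to the batch, then paginate AGAIN for the actual page
  users(func: uid(batch), first: 10, offset: 0) @cascade {
    name
    plan {
      name
    }
  }
}
```

And if the first 1,000 candidates contain fewer than 10 users with a plan, the page still comes back short, so you are back to choosing between one enormous query or re-running with ever bigger batches.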
My previous reply was just me brainstorming about the problems involved with any kind of “fix”.
@iluminae, to continue since I can’t reply to myself:
I am interested, though: using DQL prior to v21.03, how do you know if you are at the end of a paginated result without doing an additional count every step of the way?
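For context, the “additional count” would be a separate query you run alongside every page fetch, something like:

```
{
  # a separate count query run alongside each page fetch
  total(func: type(User)) {
    count(uid)
  }
}
```

And of course count(uid) here knows nothing about what @cascade will later drop, which is the whole problem.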
In the higher GraphQL layer, before we had aggregate edges and queries, there was no way to know how many results there might be, so when I got an empty or incomplete result I assumed we were at the end. But if I used cascade, then even the first many pages could be empty while the remaining ones were not.

Furthermore, it would be easy to assume that when I applied cascade and first: 1 returned [], any page offset after that would also be empty, but that was not true: there could be many, even hundreds of pages before I found one that survived the cascade. This led me to draw conclusions from my data that were not accurate. For myself, I simply refuse to use cascade and pagination together anywhere.
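As a sketch of what I mean (again with my placeholder `User`/`plan` predicates), in those versions pagination was applied before the cascade filtering, so a query like this could come back empty even when later offsets had matches:

```
{
  # the page of one is selected FIRST, then @cascade filters it, so if
  # that single user has no plan edge the result is [] even though users
  # at later offsets would have survived the cascade
  users(func: type(User), first: 1, offset: 0) @cascade {
    name
    plan {
      name
    }
  }
}
```

So paging forward until you hit [] tells you nothing; the only way to be sure is to walk every page to the true end.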
I think the ultimate decision comes down to a difference in use cases: you use pagination to make the query bearable, whereas others use pagination to actually paginate the results.
I wonder if your end users understand that an empty result set might not actually be empty, since you are applying some kind of pagination to their queries. If I let a user build a query to find all of the users with a plan (given my simple illustration above), and I paginated it to say 1,000 to make it bearable, and it returned no results, would that user believe there are actually no matches, or would they correctly know that only the first 1,000 users did not match the pattern queried? (I don’t know anything about your application, but…) It could be dangerous to let an end user believe something to be true when it is not.

For instance, if the query was made to look for fraudulent activity and the first 100,000 accounts checked out clean, it might be assumed there was no fraudulent activity, when in fact accounts 100,001 through 200,000 could all be fraudulent. That goes along the lines of bad polling data assuming that a small sample of the population correctly, or even somewhat closely, resembles the whole.
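Purely as an illustration, with made-up type and predicate names, the trap looks like this:

```
{
  # an empty page here does NOT mean no fraud exists, only that nothing
  # in the first 100,000 accounts survived the cascade
  suspects(func: type(Account), first: 100000) @cascade {
    accountId
    flaggedTransaction {
      amount
    }
  }
}
```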