ChatGPT is impressively good. I cannot put into words how impressed I am with ChatGPT. As a large language model, ChatGPT is very sophisticated and life-like. It is non-sentient and has no thought, will, feelings or opinions. But spend an hour with it and you will begin to question your perceptions of sentience.
And still...
I have witnessed ChatGPT give incorrect terminal commands a great many times, and some of them (though not many) were dangerously wrong.
It has improved a lot since 3.0, but in testing it still misses the mark, and here is why:
It cannot think about the answer it produces, cross-reference and check it, much less test it before providing it.
It structures a response based on a statistical overview of the discussions across the archived internet that were fed into it, creating what you might consider an "Average Snapshot" of how often certain wordings appear together in text.
Again, it does this impressively. It is astounding.
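To make that "Average Snapshot" idea concrete, here is a toy bigram model: it counts how often each word follows each other word in a tiny corpus, then generates text by sampling those frequencies. Real language models are vastly more sophisticated, but the underlying principle of producing the statistically likely continuation, rather than a checked or tested answer, is the same. The corpus and all names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Training text for the toy model (a real model ingests a huge corpus).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count how often every other word followed it.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, n=8, seed=0):
    """Extend `start` by sampling each next word from observed frequencies."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n):
        options = follows[word]
        if not options:
            break
        word = rng.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is always locally plausible, because every adjacent word pair really did occur in the training text, yet nothing checks whether the sentence as a whole is true or even coherent. That is the gap between "sounds right" and "is right".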
If you want an answer to a question that needs amalgamating data, it is fantastic.
If you want to use it to tease out ideas, it is very helpful.
If you want it to give precise and detailed instructions for a complex task, you are playing with fire.
No, the kernel version does not matter in this case; it comes down to the version of the sudo package that each distribution ships. Since Zorin OS inherits Debian's practice of freezing packages at specific versions at release time, this bug never made it into the repositories used by the current release.
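If you would rather verify this on your own machine than take anyone's word for it (a chatbot's included), a short script can ask dpkg directly. This is a sketch, assuming a Debian-family system (Debian, Ubuntu, Zorin OS); the FIXED string is a placeholder version for illustration, not the actual fix version of the bug discussed here.

```python
import subprocess

# FIXED is a hypothetical "first patched" version, not tied to a real CVE.
FIXED = "1.9.5p2"

def installed_version(package):
    """Return the installed dpkg version of `package`, or None."""
    try:
        out = subprocess.run(
            ["dpkg-query", "-W", "-f=${Version}", package],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # no dpkg on this system, or package not installed
    return out.stdout.strip() or None

def at_least(installed, wanted):
    """Compare using dpkg's own version rules (epochs, '~', letter suffixes)."""
    return subprocess.run(
        ["dpkg", "--compare-versions", installed, "ge", wanted]
    ).returncode == 0

ver = installed_version("sudo")
if ver is None:
    print("sudo not found via dpkg (not a Debian-family system?)")
elif at_least(ver, FIXED):
    print("sudo %s is at or past %s" % (ver, FIXED))
else:
    print("sudo %s predates %s; check the release's repositories" % (ver, FIXED))
```

Delegating the comparison to `dpkg --compare-versions` matters because Debian version strings (epochs, `~` pre-release markers, letter suffixes like `p2`) do not sort correctly under plain string or numeric comparison.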