NP-complete problems, assuming P ≠ NP, have solutions that are hard to compute but simple to verify. But relaxing the definition of "hard to compute, simple to verify" lets us draw some interesting analogies across different emerging technologies.
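As a minimal sketch of this asymmetry, consider SUBSET-SUM, a classic NP-complete problem: finding a subset that hits the target may take exponential time, but checking a proposed subset (the certificate) takes a single pass. The function names below are illustrative, not from any particular library.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Hard to compute: brute-force search over all subsets,
    exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset_sum(numbers, target, certificate):
    """Simple to verify: check the certificate in linear time."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

nums = [3, 34, 4, 12, 5, 2]
cert = solve_subset_sum(nums, 9)         # slow in the worst case
print(verify_subset_sum(nums, 9, cert))  # fast: True
```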
There's public-key cryptography, which relies on hard-to-compute, easy-to-verify problems like the factorization of large integers, or the discrete logarithm on elliptic curves. There are also zero-knowledge proofs, which let counterparties prove that they know something without revealing the actual secret. These problems aren't necessarily NP-complete, but they are still hard to compute without the right information.
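The factoring case makes the asymmetry concrete: checking that p and q multiply to N is one operation, while recovering them from N alone is believed to be infeasible for large N. A toy sketch, using small primes for illustration (real keys use primes hundreds of digits long):

```python
p, q = 10007, 10009  # small primes, for illustration only
N = p * q

def verify_factorization(N, p, q):
    """Easy to verify: one multiplication and a comparison."""
    return p > 1 and q > 1 and p * q == N

def factor_by_trial_division(N):
    """Hard to compute: ~sqrt(N) steps, i.e. exponential
    in the bit-length of N."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d, N // d
        d += 1
    return None

print(verify_factorization(N, p, q))  # instant: True
print(factor_by_trial_division(N))    # grows hopeless as N gets large
```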
Large language models are a different way of solving hard-to-compute, easy-to-verify problems. Before LLMs, if you were given a prompt, generating the associated image took time: a talented artist might need hours (or minutes, or days) to create a polished piece. Once created, it would be easy to verify whether it fit the criteria: is this an image of a horse wearing sunglasses? LLMs make the problem (relatively) easy to compute.
Easy-to-compute problems can be made into all kinds of building blocks. They can be generalized and personalized. Moreover, cheap computation drops many of the old constraints.
There are no problems that are easy to compute yet hard to verify, at least for deterministic computations: to verify a claimed answer, you can simply re-run the computation and compare outputs, so verification never costs more than computation.
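That argument fits in a few lines. A sketch, assuming the function f is deterministic:

```python
def verify_by_recompute(f, x, claimed_output):
    """Verification is never harder than computation:
    just recompute f(x) and compare."""
    return f(x) == claimed_output

print(verify_by_recompute(lambda n: n * n, 12, 144))  # True
print(verify_by_recompute(lambda n: n * n, 12, 150))  # False
```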