Doesn't the Turing Test determine only whether a machine can fool people into thinking it is intelligent?
If fooling people is the defining characteristic of AI, am I a fool to think that AI can produce work of concrete (non-foolish) value?
Richard Sutton - The future of AI https://youtu.be/ThFq87Rp21s