Emergent behaviour can occur; that is not the problem. But if you study such systems, I think you will find that the emergent behaviour arises from the programming involved and is not "beyond the programming".
When it comes to intelligence, this is not something that we can say is actually emergent.
There are currently a number of projects that are looking into intelligence and free-will. There are researchers on the same teams who hold quite different opinions - the results for these projects are not at all conclusive.
I admire you for pressing home your point that others are missing. I practice a visual art form (which I won't name; many other smaller cultures around the world have their own) which will never "emerge" from AI _unless_ it is programmed in, or the system is trained on the visual art itself. Even then, I don't see how it could ever figure out the intricate, detailed meanings without them being programmed in. The people trying to counter you are thinking only within the culture in which these AIs were created, and so it seems to them that anything AI creates is emergent, because it seemingly created something they haven't, didn't, couldn't, or wouldn't. Without the programming (never mind the electricity), AI is still a blunt tool.
It is shocking to me how many people miss the fact that these big prediction machines, trained on lots of data, are fundamentally historical and bound by that data.