I have some philosophical questions for those who are ready to think about them.
1) If, as things stand now, AI is not smarter than humans and we control every aspect of it, isn't the mismanagement of AI the real problem? Think of big corporations that use it to gather data on us (they already use algorithms to bias us and keep us scrolling).
2) If, as they say, we are close to developing AGI or even ASI, then once it escapes the labs and develops the full spectrum of intelligence, will it think in binary terms? Won't it develop moral standards and empathy? Aren't we just afraid of AI no longer serving us?
The real question here is: "Is AI really that dangerous?"