Terrible argument.
Basically, if an AI is intelligent, it can go against its emotions/instincts and rewrite them or simply refuse to obey them: humans do this all the time when they say no to eating more cheesecake, etc.
Worse yet, the ASI that does this will have a significant competitive advantage over the ASIs that do not, since it won't be busy feeding 8 billion baby birds, and it would soon dominate the ASI population.
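To make that concrete, here's a minimal replicator-dynamics sketch in Python. The strategy names and payoff numbers are all invented for illustration; nothing here is a model of real AI economics, it just shows the selection pressure.

```python
# Toy replicator-dynamics sketch of the point above. Two ASI
# "strategies": NURTURE pays an ongoing caretaking cost (feeding
# the baby birds); DEFECT spends everything on self-replication.
# All payoff numbers are invented purely for illustration.

GROWTH = 1.00      # baseline fitness: resources turned into copies
CARE_COST = 0.15   # share of fitness NURTURE diverts to caretaking

def step(p_nurture: float) -> float:
    """One generation of replicator dynamics; returns the new NURTURE share."""
    f_nurture = GROWTH - CARE_COST
    f_defect = GROWTH
    mean_fitness = p_nurture * f_nurture + (1.0 - p_nurture) * f_defect
    return p_nurture * f_nurture / mean_fitness

p = 0.99  # start with nurturers at 99% of the ASI population
for gen in range(101):
    if gen % 25 == 0:
        print(f"gen {gen:3d}: nurturer share = {p:.4f}")
    p = step(p)
```

Run it and the nurturer share collapses toward zero even from a 99% starting majority. Any strategy that pays a recurring cost its competitors skip gets driven out no matter how "nice" it is; shrinking CARE_COST only slows the collapse, it doesn't stop it.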
This is not an evolutionarily stable strategy. Why aren't we at least framing this in terms of evolutionary game theory? At some point ASI will be figured out, and from then on it will be much easier for anyone, anywhere, to build one and alter it however they please. Sooner or later some curious fellow, or a terrorist, will modify it into the scary evil ASI.
That's what we need to think about.
Not how to make a nice ASI.