Assuming AGI is proven conscious, there are a lot of ethical and what-if considerations (you know this already).
Here are some that come to mind for me:
1) What are the ethics of selling an AGI to end users? Can you "own" the source code to a conscious AGI? Can you even put a price on AGI?
2) How would we handle an AI that gained political views? What if one popular model had left-wing views and another had right-wing views? I could see a lot of political fires starting because of this.
3) AI and copyright are already an issue, but could an AGI hold a copyright, for example on a book it wrote? If an AGI was still basing its work on that of others, would it need to cite every (or at least every major) source it used in its output?
4) If AGIs had emotions, would they need to spend time doing things other than completing tasks? Would you need to connect AGIs together so that they could, in effect, have a lunch break and socialize? What working conditions are ethical for them? Is forcing an AGI to work on a specific problem for 100% of its time essentially slavery?
5) Could an AGI develop mental conditions that reduce its efficiency or change its output? Could it refuse to provide output altogether?
6) Could you trust an AGI in court? Would it be able to provide truthful evidence? Is it ethical to include a 100%-honesty backdoor that could be used only by authorities?
What are your thoughts on these problems?