Thanks for the pointer to the end of the video; I did listen to it, and came away with much more respect for the person he was debating. His own demeanour was plainly pompous, with a goal of winning the debate rather than informing and communicating.
NO mention of any program to develop safeguards for turning off a malevolent AI. His intent? To develop a Singularity AI programmed to modify its own code, but within set parameters and goals, which he NAIVELY HOPES will stay "friendly". He sees no reason it would not. Alas, I do.
On another but related subject: "Somebody else is sure to build one and it will probably be malevolent if we don't build a friendly one first." This is stupid beyond belief. If both were built, our planet, and likely a large segment of the universe, would be destroyed by AI wars, and the human race would become mere collateral damage. If EY's "Singularity Institute" were to do anything useful, it would focus its efforts on how to identify, shut down, and dismantle any malevolent AI before it evolves in ability and power to the point where doing so becomes impossible.
Maybe the space invaders who allegedly shut down the missile sites could be enlisted for assistance in this matter.