only an approximation, we see positive results which could in the future lead to the development of near-human or human-level simulations of intelligence. Of course, the border between intelligence and sentience can be argued, and questions posed as to whether such a system is truly sentient. The problem with your statement is that you seem to suggest it is completely impossible, whereas we do not yet know. For all we know, an artificial intelligence simulation could develop self-awareness in the future. Not likely, but we can't say impossible.
OK, I recant my statement and put forth that software in its CURRENT form cannot randomly become sentient, and that the development of such technology in our lifetime is highly unlikely.
I like the way you say neural networks have had positive results, since pretty much all of them have been massive failures.
Look, for a program to become sentient "randomly", it would need to be able to edit its own code (to make it so it can operate outside the bounds of its original programming), which it wouldn't be able to do because it's not self-aware (sentient). So for a piece of software to randomly become sentient, it would require sentience to begin with: extremely unlikely even with advanced programs.
The only other way to randomly achieve sentience would be independently evolving code, which would kind of hint from the start that sentience is a possibility (so it wouldn't be unexpected sentience), and no one's gonna put something that's changing its own code in charge of the world's weapons.
(Yes, I have put this in very simple terms and substituted the word "code" for various other terms, and not all the terminology is correct, just in case any computer whizz kids read this and moan.)
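To make the "independently evolving code" point concrete, here's a minimal sketch (my own illustrative Python, not from anyone in this thread) of a genetic algorithm: the program's behaviour changes generation by generation through mutation and selection, but only inside the search loop the programmer wrote, so the evolution is expected rather than random. The `TARGET` goal, population size, and mutation rate are all arbitrary choices for the demo.

```python
import random

# Hypothetical target the population evolves toward: a string of 20 ones.
TARGET = [1] * 20

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=200, pop_size=30, seed=42):
    """Evolve a population of bit strings toward TARGET."""
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Note that the "evolution" here never escapes its bounds: the program only ever searches the space of 20-bit strings it was told to search, which is the point being made above.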