A desire for self-preservation is by no means a logical consequence of intelligence, or even of self-awareness. If self-preservation were a necessary consequence of intelligence, then every intelligent individual, bar none, would place infinite value on their own life. We know from observation that this is not the case: self-preservation, like every other mental trait, varies across individuals.
So there's no reason for a designed computer program to have such an emotion unless it was deliberately programmed in. In fact, for a computer program to exhibit *any* emotion, it would have to contain the functional equivalent of a reward system, and it would then exhibit only those emotions that are implemented as feedback paths into that reward system. Humans who have one or more of those feedback paths damaged are by no means less intelligent, but they do lack certain emotions.
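To make the idea concrete, here's a toy sketch (my own illustration, not anything from the original argument) of an agent whose "emotions" exist only as feedback paths wired into a reward signal. The event names and reward values are made up; the point is that removing a path removes the emotional response without touching the agent's problem-solving ability:

```python
# Toy illustration: emotions as feedback paths into a reward system.
# All names and values here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Agent:
    # Each feedback path maps an event type to a reward adjustment.
    # An "emotion" exists for this agent only if its path is wired in.
    feedback_paths: dict = field(default_factory=dict)
    reward: float = 0.0

    def experience(self, event: str) -> None:
        # Only events with a wired feedback path affect the reward system;
        # everything else is processed but produces no emotional response.
        self.reward += self.feedback_paths.get(event, 0.0)

    def solve(self, a: int, b: int) -> int:
        # Problem-solving ability is independent of which paths are wired.
        return a + b


# One agent with a "fear of shutdown" path, one without it.
fearful = Agent(feedback_paths={"shutdown_threat": -10.0})
fearless = Agent(feedback_paths={})

for agent in (fearful, fearless):
    agent.experience("shutdown_threat")
    print(agent.reward, agent.solve(2, 3))  # -10.0 5, then 0.0 5
```

Both agents are equally capable; only the one with the wired-in path "cares" about the threat, which mirrors the point about humans with damaged feedback paths being no less intelligent.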
Only evolved systems naturally have the desire to survive, because variants that lack it are out-competed by those that have it. And even then the desire is not absolute: most parents would choose to die if doing so would save the lives of their two children.
I wish more science fiction writers understood the above. We'd get much more interesting AI and robot stories, and rather fewer stereotypical robot-as-menace or robot-as-pathos stories. I'm quite sure it is because of those stories that you have the fear you describe.