Yes, computers can make moral choices.
You make moral choices based on your instilled morality. In other words, your parents taught you that stealing is wrong. You don’t steal because you have been steeped in the idea that stealing is wrong. To get a computer to make a moral choice about stealing, we just have to teach it that stealing is wrong. For a non-true AI (i.e., a computer that runs on pre-set programs) it’s pretty easy. “Don’t steal.” Done. For a real AI, we will have to instill hard and fast, unbreakable laws, e.g., do not intentionally harm humans or allow them to come to harm through your inaction, and if harm is unavoidable, minimize it as much as possible. If it’s a smart enough AI, that would cover not only driving but stealing as well. Stealing harms people, the AI isn’t allowed to harm people, and therefore stealing is forbidden.
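To make the “pre-set program” case concrete, here’s a minimal sketch of the idea in Python. Everything in it (the action names, the forbidden list) is hypothetical and invented purely for illustration; it’s the “Don’t steal. Done.” approach, not a sketch of any real system:

```python
# A minimal sketch of the "pre-set program" approach: hard-coded,
# unbreakable rules checked before any action is taken.
# All names here are hypothetical, for illustration only.

FORBIDDEN_ACTIONS = {"steal", "harm_human"}

def is_permitted(action: str) -> bool:
    """Return True only if the action breaks none of the hard-coded rules."""
    return action not in FORBIDDEN_ACTIONS

print(is_permitted("steal"))             # False -- "Don't steal." Done.
print(is_permitted("deliver_groceries")) # True
```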
Computers may not be able to make independent moral choices, but that’s good because different humans have different ideas of what is and is not moral. After all, the slave owners of the 1800s did not consider themselves morally bankrupt simply because they owned people. The racists of the 1950s forcing black people to sit at the back of the bus and go to separate, lesser schools did not consider themselves immoral. There are some people today who still don’t consider those things immoral, while the rest of us emphatically do. I would not want to leave “how to treat black people” up to a computer to decide. I would want it to have a hard and fast rule that black people aren’t to be treated any worse than people of any other race. That’s a moral guideline, but it’s one that comes from (hopefully morally upstanding) humans.
Put another way, what you call “morality,” I call “pre-considered modes of behavior.” There’s probably nothing that would induce you to intentionally kill a child because you are not morally bankrupt. If you consider it a “moral” decision not to murder children, that’s fine. But we can tell a robot “do not murder children,” and there you go – it’s gonna be morally upstanding because we told it to be.
I get the sense that what you’re really angling at is that you don’t want a computer to come up with its own moral code, as was the case with HAL 9000 in the 2001: A Space Odyssey films. Yeah, neither do I. I want to be able to predict the “moral” decisions the cars will be making. In the bus problem, whatever side of the fence the car is going to come down on, I want to know what that decision is gonna be before it happens in my car.
And that’s entirely possible. To grossly oversimplify, we just tell it to minimize harm to life and property, in that order (assuming that’s what we want it to do in all cases). Obviously, getting it to understand how to do that is complex, but it’s not insurmountable.
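As a rough sketch of how that ordering could be written down (the maneuvers and harm numbers below are invented, and estimating harm in the real world is the genuinely complex part), life can be made to strictly outrank property by comparing the two together:

```python
# A rough sketch of "minimize harm to life and property, in that order."
# Each candidate maneuver carries an estimated harm score for people and
# for property. The maneuvers and numbers are made up for illustration.

candidates = {
    "brake_hard":      {"life_harm": 0, "property_harm": 2},
    "swerve_left":     {"life_harm": 1, "property_harm": 0},
    "maintain_course": {"life_harm": 3, "property_harm": 0},
}

def choose_maneuver(options):
    # Comparing (life_harm, property_harm) as a pair makes life strictly
    # outrank property: property damage only breaks ties between maneuvers
    # that are equally safe for people.
    return min(options, key=lambda name: (options[name]["life_harm"],
                                          options[name]["property_harm"]))

print(choose_maneuver(candidates))  # -> brake_hard
```

Given a rule like that, the car’s choice is completely predictable from its harm estimates, which is exactly the point.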
Part of the current driving problem is unpredictability. If I’m 5 inches from your bumper in the next lane over, you may or may not dart in front of me. I can’t always predict that. If you were a computer and I knew what your pre-programmed conditions were for changing lanes, I could.
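To illustrate (the thresholds and function below are hypothetical, not taken from any real autonomous-driving system), a computer’s lane change could be a simple, published rule, and anyone who knows the rule knows exactly when the car will or won’t move over:

```python
# A hypothetical, fully deterministic lane-change rule. If the car next to
# you ran something like this and you knew the thresholds, you could
# predict whether it will dart in front of you.

MIN_GAP_M = 10.0           # required clear space in the target lane, meters
MIN_SPEED_ADVANTAGE = 2.0  # target lane must be at least this much faster, m/s

def will_change_lanes(gap_in_target_lane_m: float,
                      target_lane_speed: float,
                      current_lane_speed: float) -> bool:
    return (gap_in_target_lane_m >= MIN_GAP_M and
            target_lane_speed - current_lane_speed >= MIN_SPEED_ADVANTAGE)

# With only 5 inches (~0.13 m) of clear space, the rule says: stay put.
print(will_change_lanes(0.13, 30.0, 25.0))  # False
```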