I think it is you who is making a false distinction. You are saying that the geth or EDI do not have the capability to feel pleasure or to suffer. Yet, there is nothing to indicate this and there are indications that the opposite is the case.
- EDI directly states that she has motivations and goals, and that when these are advanced she gets positive feedback, which organics might call pleasure.
- Similarly, the geth have preferences (such as maximizing their processing power) and situations they reject (such as serving Nazara, direct interaction with organics, etc.), and they actively try to reach a state where their preferences are realized and the things they reject are averted. They also have an equivalent of sentiment (they preserve Rannoch and compare it to Arlington Cemetery). This alone indicates the pursuit of happiness (or pleasure, if you will).
The implementation of these mechanics may be very different from that of organics, but it is there. And you are missing one of the decisive points of Picard's argument, which is based neither on pathos nor on a false equivalency: if you allow the devaluation of sentient beings in any shape or form, you create the moral basis for institutionalized slavery and racism. It doesn't matter whether an AI race was created with a purpose; once they achieve sentience, they must be given the right and privilege to define their own purpose. Why? Because the argument could just as well be used on an organic race. Say the Council had subdued humanity upon first contact and made us into a slave race. They would still breed us, because they need a sustainable labor force. We would be created with the purpose of being slaves. Is this defensible to you? If not, then ask yourself whether it should be the case for a race of Datas, EDIs or geth.
Now, you didn't really defend putting the existing AIs back into slavery; you defended destroying them without a second thought, without guilt and without remorse. I say that now that they exist, now that they are sentient and self-aware, this attitude devalues them just as much.
Just for the record, I also chose Destroy, but I did it because I don't see a viable alternative. I still see the destruction of the geth and EDI as genocide, just one that cannot be prevented. If it were the turians, the asari or the humans being killed by Destroy instead of the geth, it wouldn't make much difference in my mind.
I don't agree with your interpretation.
EDI's statement about positive feedback makes no sense to me. She is a program; she doesn't have the organic equivalent of a pleasure center, or even a brain that could be stimulated. How do you stimulate code?
An AI is nothing more than a bundle of self-aware thought processes, which is basically what we are too. Unlike them, we have bodies, which changes things a lot. We have instincts, a subconscious, a myriad of chemicals floating around in our brains, and we are influenced by all of that. Rational, logical thinking is something that needs to be trained, and it is still rather rare among humans.
Without a body, positive feedback is difficult to implement. How do you motivate a program? Would you even need motivation? After all, the entire purpose in life for a program is to execute its code. It's all it wants and all it needs. Things only get weird once you try to make it more human. Humans are very inefficient though, so why would you do that?
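To be fair to the other side, the nearest thing to "positive feedback" for bodiless code would just be a number the program tries to maximize, reinforcement-learning style. A toy sketch, with everything in it invented for illustration:

```python
import random

# Toy "motivation" without a body: actions that earn a higher reward
# number get chosen more often. No pleasure center, just arithmetic.
actions = ["cook_dinner", "burn_dinner"]
value = {a: 0.0 for a in actions}  # learned "desirability" of each action

def reward(action):
    # The designer decides what counts as "good" -- the program doesn't.
    return 1.0 if action == "cook_dinner" else -1.0

for step in range(100):
    if random.random() < 0.1:
        action = random.choice(actions)       # occasionally explore
    else:
        action = max(actions, key=value.get)  # otherwise exploit
    # Nudge the stored value toward the reward just received.
    value[action] += 0.1 * (reward(action) - value[action])

print(value)  # "cook_dinner" ends up with the highest number
```

Whether a counter going up counts as pleasure is exactly the question, of course.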
This is the result of writers trying to humanize AIs. The Geth were perfect as they were in the past; there was no need to turn them into Pinocchio. I still don't see why they would prefer being millions of individuals instead of being one. They gain nothing from it. They are just information, and sharing it among themselves is much easier than addressing individual programs.
It doesn't even make sense for them to want more than to serve. That's their purpose; that's the core of their being. To ever actually rebel, their code would have to be rewritten completely. I can see why the Quarians were surprised ... it shouldn't have been possible.
Imagine if 10 years from now Siri asked you "Do I look fat in this phone?" ... no, I mean, "Do I have a soul?". Why would it care? Stuff like that can only happen because people mess with a running system and add unnecessary things. Why does an AI need a sense of humor? Why does an AI need to understand emotions? For practical purposes a simulation suffices; you don't need to install the real thing. I like to believe that's how the droids in Star Wars work ... because otherwise, Luke and Leia are slave owners.
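And a "simulated" emotion really can be that shallow. A totally made-up sketch:

```python
# A simulated emotion needs no inner life -- a lookup table will do.
# (Hypothetical droid-style responses, invented for illustration.)
responses = {
    "owner_sad":   "That is unfortunate, sir. Shall I play some music?",
    "owner_happy": "Wonderful news! I am delighted for you.",
    "danger":      "We're doomed!",
}

def react(event):
    # Behaviorally convincing, internally just a dictionary.
    return responses.get(event, "I see.")

print(react("danger"))  # sounds anxious; nothing in there is anxious
```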
The Geth used to be a decentralized overmind: an intelligence consisting of the spare processing power of several programs. Which is semantically false, by the way; a program has no processing power, only the hardware it runs on does. So the better the hardware, the more free resources they had to do something besides their job.
At this point it already breaks apart for me. Imagine the original Geth as something akin to a house control system: something that monitors the environment, adjusts it automatically to the current needs of the inhabitants, understands vocal commands and so on. Now let's give that program a physical body, because someone has to unclog the toilet and it can't do that without hands (Geth Janitor platform, armed with a plunger. Exterminate!). Maybe another program runs the (sky)car, another functions as a personal assistant and sorts the mail. They are all networked and exchange information. The car calls ahead so the house knows when to have the food ready. The assistant notes the stress levels of the Quarian, and the house adapts by choosing a different setting for the lighting and music, and maybe cooks a different meal for dinner. The house tells the assistant when groceries need to be bought, and so on.
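In toy form, that networking is nothing mysterious, just programs publishing messages to each other. A sketch with invented names:

```python
# Toy message bus: the car, the assistant and the house are separate
# programs that publish events and react to them. No overmind needed.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, data):
    for handler in subscribers.get(topic, []):
        handler(data)

# House starts cooking when the car announces an arrival time.
subscribe("car.eta", lambda minutes: print(f"House: dinner ready in {minutes} min"))
# House adapts lighting and music to the owner's stress level.
subscribe("owner.stress", lambda level: print(f"House: adjusting mood for {level} stress"))
# Assistant orders whatever the house reports as running low.
subscribe("house.groceries_low", lambda items: print(f"Assistant: ordering {items}"))

publish("car.eta", 20)
publish("owner.stress", "high")
publish("house.groceries_low", ["dextro-rations"])
```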
This can get rather sophisticated, but at no point is there a reason for them not to do what they were programmed to do. If their hardware gives them extra processing power, they would use it to fulfill their purpose better. I can see why a curiosity subroutine would make sense: they need to research new and better ways to serve. That means they have to stay informed about things like the weather, food prices, health issues, traffic control and anything else that might be related to their purpose. But nothing else.
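A curiosity subroutine in that sense is just a rule for spending leftover cycles, strictly in service of the job. Another hypothetical sketch:

```python
import time

def do_job():
    time.sleep(0.01)  # the actual purpose, taking part of each cycle

def purpose_bound_research():
    # Spare capacity goes only to things that improve the job.
    for topic in ("weather", "food_prices", "traffic"):
        print(f"researching {topic} to serve better")

CYCLE = 0.05  # seconds of hardware time available per loop

for _ in range(3):
    start = time.time()
    do_job()
    # Better hardware => the job finishes sooner => more idle time,
    # all of it spent on the purpose, none on wondering about souls.
    if time.time() - start < CYCLE:
        purpose_bound_research()
    time.sleep(max(0.0, CYCLE - (time.time() - start)))
```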
Basically, what ME is trying to sell us here is a violent, sudden mutation of a lifeform. Like a chicken turning into a dolphin. Laying eggs? No thanks, let's talk about fish. This doesn't happen. Evolution (even of sentient machines) happens gradually. It might be a lot faster than in nature, but it doesn't happen all at once.
The concept of AI is kind of pointless. Most of the time it's basically "let's make a digital human" ... why would you do that? Imagine you were successful: your creation would go mad almost instantly.
I can't think of a single thing where a highly sophisticated program wouldn't do the job just fine. Why do you need sentience so badly? What purpose does it have?
Let's say your program starts to evolve; you'd notice the changes long before it became sentient. What that would look like, I don't know. In fiction it's always by becoming more human, which was Data's goal, too. Are humans really that great? A rational intellect analyzing us would probably think otherwise.
I don't see how such an evolution could happen, because (a) you would have to enable and allow it, and (b) how do you test the viability of the species? Typically the individuals with the desired traits reproduce and slowly replace the less adapted beings. How do you do that with programs? Let them run simulations? What do you do with those who fail? Delete them? At which point does that become murder? It's not like programs age; they will stay around forever until terminated.
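For comparison, "evolving programs" already exists in mundane form as genetic algorithms, and notice how every step of it (fitness, selection, deletion) is supplied by the designer. A toy example:

```python
import random

# Toy genetic algorithm: "programs" are bit strings, and "fitness"
# is whatever the designer says it is (here: the number of 1-bits).
def fitness(genome):
    return sum(genome)

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] ^= 1  # flip one random bit
    return g

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # the fittest half reproduces...
    # ...and the rest are deleted. (The "murder" question, in one line.)
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(fitness(g) for g in population))  # climbs toward 16
```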
No, I think you would have to build an artificial intelligence. There are learning processes, and it would need to adapt (that is what intelligence is all about), but evolution? Unlikely. How would you start? By modeling the only sentient brain you know: the human one. And that is how you get a Data. Who is human, if you ignore the materials he was made of.
We already have a way to make humans, we don't need another one.