As for the 10m cable length: one of my customers is running a network with mainly HP 5400zl switches, interconnected by quite a lot of 15m CX4 cables. (the distance between some switches is exactly 14.90m - including cutting some corners) They don't have any issues (well, at least not with the cabling ;- )
So CX4 would probably be your least expensive option, IF the datacenter allows you to run this type of cabling and the length stays below 15m (most datacenters require cables to follow pre-installed ducts, which adds a lot of extra length to your cabling). For CX4, 15m really is the maximum cable length.
If you want to mix the 5500 and your existing switches, keep in mind that the CLIs of the two types are very different, which makes configuring it all a bit more difficult compared to having all switches of the same type. The 2910 is from the ProCurve family, the 5500 is from the H3C family of switches. Both are good choices in my opinion, but mixing them might not be your best option. Especially if you are doing it yourself, and configuring switches is not your daily routine.
Since (afaik) you can put two 2x10G modules in the back of each 2910, you can make a ring of four 2910 switches, connecting every switch to two other switches by a trunk of two 10G links (distributed over the two modules for maximum redundancy). Run spanning tree to resolve the loop, and you have a setup with quite a lot of redundancy and 20Gbps of bandwidth. And probably not too big an investment in new switches.
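To give an idea, the per-switch config for that ring would look roughly like this on a ProCurve 2910al (a hedged sketch only - the port names A1/A2/B1/B2 and the priority value are assumptions, check them against your actual modules):

```
# Hypothetical ProCurve 2910al snippet for ONE switch in the ring.
# Assumption: A1/A2 and B1/B2 are the 10G ports on the two modules.

# Trunk one port from EACH module towards each neighbour,
# so losing a module does not take down a whole inter-switch link.
trunk A1,B1 trk1 lacp
trunk A2,B2 trk2 lacp

# Enable (rapid) spanning tree to block the loop in the ring.
spanning-tree
# On one switch only: lower the priority so it becomes root bridge
# (ProCurve uses priority steps of 4096, 0-15).
spanning-tree priority 1
```

Repeat the same pattern on all four switches (with only one of them getting the lowered root priority), and spanning tree will block one of the four ring links until a failure occurs.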
Having four 5500 switches would give you two stacks of two switches. Both switches within one stack (in one rack) are interconnected by a special stacking cable, giving high-throughput 'backplane' connectivity between them. This is great for performance, and for management, because they form one logical switch with 2x48 interfaces. But if a switch crashes, most of the time all switches in the stack crash at the same time, because logically they have formed one switch. In my experience most crashes are software crashes, and those generally take down the whole stack; PSU-related issues are the most common ones that crash only one of the stack members. But then again: how often does a switch (in a datacenter) crash at all...
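For comparison, on the H3C/Comware side a two-member stack (IRF) is set up roughly like this - again only a sketch under assumptions: the member numbers and 10G port names are made up, and depending on the exact 5500 model the stack may use dedicated stack ports instead, so check the IRF manual for your hardware:

```
# Hypothetical Comware (H3C 5500) IRF sketch for stack member 1.
system-view
 irf member 1 priority 32                # highest priority becomes master
 interface ten-gigabitethernet 1/0/27
  shutdown                               # port must be down before binding
 irf-port 1/1
  port group interface ten-gigabitethernet 1/0/27
 irf-port-configuration active           # activate the IRF port binding
```

The second member gets a different member number and a lower priority; once both are cabled and active, you manage the pair as one logical switch from a single CLI session.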
It depends on your environment whether you benefit more from the stacked approach with the 5500 (higher bandwidth, less complex to manage because you have fewer logical switches) or the individual approach with the 2910 (bandwidth limited to 2x10Gbps, four individual switches to manage, one switch can crash and the others take over - you need spanning tree for that).
If you mix your current switches with two 5500 switches, you get just a bit of the positives and most of the negatives from the above... So that should really be an interim solution, if you ask me.