The problem with science at the atomic and sub-atomic level is that we cannot actually see the little 'goodies': the nucleus (protons, neutrons), electrons, and other things still to be discovered.
That's not specifically the trouble. I'll explain why through a little history. Then I'll offer my opinions, which extend beyond established science, so long as it is recognized that those opinions are not to be confused with established science.
The first more or less modern atomic theory was published in 1805. The first experimental confirmation of the atom, prior to which a lot of physicists rejected the atom as anything more than an abstraction, came via Brownian motion in 1905. Brownian motion itself was first observed in 1827, but a testable model was required before the atom could be established as more than an abstraction to most physicists. That model was provided through a separate track where realists were at odds with established science, i.e., classical thermodynamics. The thing to note here is that we weren't able to actually "see" individual atoms until this decade.
http://news.bbc.co.uk/2/hi/science/nature/8225491.stm
First atom shadow:
http://news.nationalgeographic.com/news/2012/07/120710-first-picture-atom-shadow-photograph-science-nature-smallest/
Imaging technology is getting awesome!
Classical thermodynamics was developed from a set of laws derived from observation, independent of any particulate structure, and even applied to solids. A perfectly general theory. The realist notion of an atomic structure was generally rejected, for lack of evidence or even a workable model. This changed with the development of statistical mechanics in the latter half of the 19th century. At that point many physicists could simply say, "So you have an empirically equivalent model. So what? It doesn't prove anything." That held until Einstein published his paper on Brownian motion, which demonstrated how statistical mechanics empirically differed from classical thermodynamics. When the experiment was performed, and statistical mechanics won out, it was earth-shattering in the physics community of the time. It was tantamount to a violation of the 2nd law, albeit in a very restricted manner involving presumptions about the nature of randomness which may not be entirely valid mechanistically.
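Einstein's key prediction, that a Brownian particle's mean squared displacement grows linearly with time rather than ballistically, is easy to check with a toy random walk. The sketch below is purely illustrative (unit step sizes, discrete time), not the original 1905 analysis:

```python
import random

def mean_squared_displacement(num_walkers=2000, num_steps=200):
    """Track many 1-D random walkers; return the mean of x^2 after each step."""
    positions = [0.0] * num_walkers
    msd = []
    for _ in range(num_steps):
        for i in range(num_walkers):
            positions[i] += random.choice((-1.0, 1.0))
        msd.append(sum(x * x for x in positions) / num_walkers)
    return msd

random.seed(1)
msd = mean_squared_displacement()
# Diffusive scaling: msd after t steps is about t (for unit steps), so
# doubling the elapsed time roughly doubles the mean squared displacement.
print(msd[99], msd[199])
```

This linear-in-time signature is what distinguished the statistical-mechanical picture from a smooth continuum description, and it is what Perrin's experiments confirmed.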
Brownian motion thus effectively established the "real" existence of the atom, even though it would be another 100 years before we could actually "see" one. Also, this inability to "see" atoms, or even prove their existence, never stood in the way of developing the periodic table of elements, the molecular theory of gases, etc. We simply had to develop the models describing their existence before the tests and technologies to demonstrate that existence were possible. The same goes for the modern issues in quantum mechanics. Only, therein lies a problem.
Main point:
The tools to "see" at the quantum level are not required, and have never been explicitly required previously. Historically we needed the models before, not after, the capacity to see the things the models described. It was the models that provided us with the tools to see them to begin with. It only seems obvious in hindsight that seeing is required, but seeing is not where the models came from. Rather, seeing is what empirically justified the models, after the fact of the models giving us the tools to see with.
The key to good research/science is to rebel against the establishment, as only the brave will succeed.
;)
In some sense quantum mechanics shares some of the interpretational issues classical thermodynamics had before the development of statistical mechanics. Realists, like myself, are once again stuck in the back seat. It is not the "establishment" that put us here, and blaming the "establishment" for our failures is not very productive, just as it wasn't very productive to blame classical thermodynamicists for lacking the realism preferred by some. The job is one of doing the science, not blaming others for not doing it in a manner suited to some personal prejudices about what nature "really" is. This is where the problem lies, though.
The problem is that, given a classical background space, it is trivial to prove that consistency with quantum mechanics is unequivocally impossible. We have a whole slew of no-go theorems, derived from quantum observables themselves, that demonstrate this. Prior to these no-go theorems we had a slew of classical claims that simply wouldn't fit the facts, i.e., were empirically falsified. This includes quantization itself in the beginning, and later the ultraviolet catastrophe, transverse waveforms that should classically be longitudinal, black-body radiation, etc., etc. Unfortunately, too many champions of realist models do not want to face these issues head on. Providing some mechanism to get around one or two such issues, then grandstanding and acting all self-righteous when it's rejected, due to too many issues left unaddressed, is not very productive, or intellectually honest for that matter.
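As a concrete instance of such a no-go result: Bell's theorem in its CHSH form bounds any local hidden-variable model by |S| ≤ 2, while quantum mechanics predicts values up to 2√2 ≈ 2.83, and experiment sides with quantum mechanics. A minimal numeric sketch using the textbook singlet-state correlation E(a, b) = −cos(a − b), with angles chosen for the maximal violation:

```python
import math

def E(a, b):
    """Spin correlation of a singlet pair measured along angles a and b."""
    return -math.cos(a - b)

# Standard CHSH measurement angles giving the maximal quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```

The point of the sketch is only that the quantum correlations themselves, not any interpretive gloss, are what rule out a classical background of pre-assigned local properties.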
To address these issues faced by realists we must, first and foremost, accept the realities of the empirical data. "Rebelling" all too often translates into some form of rejection of the empirical data, which rightly gets ignored. When, and only when, you have the goods to address this empirical data will you have the tools to redefine the "establishment". Lorentz ether theorists are quintessentially guilty of this.
Just as statistical mechanics was required to mature before it could rightly challenge the establishment of its time, so too must any model challenging the establishment today. You cannot demand it be accepted on a priori grounds alone. You may address some subset of the empirical data with some model, but you can't use the apparent validity of an overly limited subset of data to demand acceptance of a general validity. At least not while maintaining intellectual honesty. Admit to the limits of your own modeling attempts first and foremost, and make your critiques your most prized possessions if you wish to continue development, because they are pointing the way for you. Rebelling against them only throws the baby out with the bathwater.
>>> My Opinion <<<
The general approach today appears to consist primarily of looking at quantum mechanics and asking how it diverges from classical physics. I think a more fruitful approach is to go back to classical physics and ask precisely what is wrong with it. Only when this is addressed can we reasonably expect to establish consistency between classical and quantum systems with some form of realistic model. The biggest clue is something we already know is wrong with Newtonian models: the notion of a measurable background space. Note how the no-go theorems are predicated on an existing Newtonian space with distinct objects and properties. Yet, given relativity, we know that a Newtonian background space is an abstraction.
Even prior to relativity, the Newtonian notions of space and time were problematic. Newtonian physics basically had a set of objects on which all measurements depended, except that Newton provided for two measurements that did not depend on any properties of the objects being measured: space and time. So if, as famously stated, time is what we measure, and what we measure are physical objects, it is contradictory to then say we have two measurements that are independent of the things we measure, as relativity later made explicit. Thus classical physics must take place in a preexisting physical background that gives us the illusion, in a weak-field limit, of an a priori measurable background space. Even relativity didn't fully address this, since relativistic effects are interpreted on purely kinematic grounds within an otherwise a priori measurable background space, such that the effects are observational and independent of mechanistic properties.
Other conceptions of background space have been posited. Descartes, in "Principles of Philosophy" [Descartes 1644], essentially argued that space devoid of corporeal objects lacks volume. A few authors have used different versions of this to construct a relational space congruent with the Relational Interpretation of Quantum Mechanics (RQM), in which not just space and time are relational but all measurables are. This variability of volume can also, in principle, be used to construct spaces consistent with general relativity, where distance and time can vary depending on your gravitational depth (not curvature). In quantum mechanics we again encounter a complex space, i = sqrt(-1); imaginary time is predicated on the same mathematics, and this complex structure underlies the Hilbert space of quantum mechanics. In essence you have a metric that can rotate in and out of an imaginary space, which you can, at least in principle, relate to the way space and time rotate in relativity such that time and distance can appear to vary. This in turn can, in principle, be given a physical interpretation in terms of RQM's arguments and Descartes' conception of space.
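The rotation analogy can be made slightly more concrete. Quantum states evolve by rotation in the complex plane, while the substitution t → iτ (a Wick rotation) turns the Minkowski interval of relativity into a Euclidean one. The following is only a sketch of that correspondence, not a derivation from anything above:

```latex
% Phase rotation of a quantum state in Hilbert space:
\[ |\psi(t)\rangle = e^{-iEt/\hbar}\,|\psi(0)\rangle \]
% Wick rotation t -> i\tau relating the Minkowski interval to a Euclidean one:
\[ ds^2 = -c^2\,dt^2 + dx^2 \;\longrightarrow\; ds^2 = c^2\,d\tau^2 + dx^2 \]
```

Whether this formal correspondence supports the physical interpretation suggested here is, of course, exactly the open question.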
You can also do some neat stuff with relational spaces. One is to avoid divergences when taking particle-size limits. To illustrate, consider a classical medium of some volume. When you take a limit on particle size (particle size approaches zero), the total volume, as defined by classical physics, does not change. However, if the local definition of a unit of distance depends on the state variables of the medium itself, resolving the self-contradiction of Newtonian physics noted above, then each step of the limiting process entails a rescaling of volume and rate, similar to the ant-on-a-rubber-band paradox. This has all sorts of neat side effects, like producing local constants from state variables, constants that only appear to vary under (bad) classical assumptions about space. Hence emergent constants of nature. This observer rescaling can then be used to avoid things like the ultraviolet catastrophe, and a slew of other problems. The Born rule also puts some severe limits on the form such models can take. Limits are good: they tell you when you took the wrong path.
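For the ant-on-a-rubber-band comparison: an ant crawling at a fixed speed on a uniformly stretching band always arrives, because its fractional progress each second forms a harmonic series, which diverges. A toy sketch with illustrative parameters (not tied to any physical model above):

```python
def ant_arrival_time(band_length=10.0, ant_speed=1.0, stretch=10.0,
                     max_steps=10**7):
    """Each second the ant crawls ant_speed along the band, then the band
    stretches uniformly by `stretch`. Uniform stretching preserves the ant's
    fractional position, so only the crawl advances the fraction.
    Returns the second at which the ant reaches the far end."""
    frac, length = 0.0, band_length
    for t in range(1, max_steps + 1):
        frac += ant_speed / length   # fractional progress this second
        if frac >= 1.0:
            return t
        length += stretch            # stretch leaves `frac` unchanged
    return None  # did not arrive within max_steps

print(ant_arrival_time())  # on the order of 10**4 seconds for these values
```

Because the harmonic series diverges only logarithmically, scaling the band and stretch rate up makes the arrival time explode exponentially, which is why the result feels so counterintuitive.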
Bottom line:
If any such modeling attempt is ever to pan out, we must first and foremost accept the unavoidable fact that quantum mechanics is telling us classical physics is irrevocably and fundamentally wrong, and define precisely what is wrong about it. We cannot repair it with some bag of illusions like Lorentz ether theory. Trying to force-fit quantum mechanics onto a flawed conception of realism is backwards, and the establishment has every right and obligation to reject it.