> Due to sensitive dependence on initial conditions, even using measurements at meter resolution will cause the accuracy of a forecast to begin to break down after only a few days.
That's an extremely simplistic take. In reality, one of the largest issues with high-resolution weather forecasts (1-3 km scale, convection-permitting simulations) is that small errors in the initialization or model dynamics lead to changes in small-scale storm structure that feed back onto larger scales of motion, disrupting the forecast. Ultra-fine measurements and simulation resolutions only exacerbate this tendency.
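For readers who want the canonical illustration of that first point: the Lorenz-63 system is the standard toy example (a sketch, not a weather model; all parameters are the textbook values) showing that an initial error, however tiny, grows exponentially until it saturates at the size of the attractor:

```python
# Toy sketch: two Lorenz-63 trajectories starting 1e-9 apart diverge to
# O(10) separation within ~25-30 model time units, regardless of how
# precisely the rest of the state is known.
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    # Simple fixed-step RK4 integration.
    traj = [state]
    for _ in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-9]))  # tiny "measurement" error
err = np.linalg.norm(a - b, axis=1)
print(f"initial error: {err[0]:.1e}, error after 30 time units: {err[-1]:.2f}")
```

Shrinking the initial perturbation from 1e-9 to 1e-12 buys only a handful of extra time units before saturation, which is the whole point: finer measurements extend predictability logarithmically at best.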
> Anywhere from accurate to exact.
You didn't answer the question. Are you trying to predict convective initiation at 100 days' lead time? Are you trying to predict a particular synoptic system? Are you trying to predict whether it will be warmer than average? These are vastly different weather prediction problems which require different approaches.
> And Bill Gates thought 64K should be enough for anybody. Do you really think computers will only have a few GB of memory 50 years from now?
Modern weather and climate modeling is already a tera- or peta-scale endeavor, depending on exactly what one is trying to do. The sorts of simulations alluded to in the OP push into the exascale.
As other commenters have noted, your odd choice of femtometer (10^-15 meter) resolution would lead to memory requirements larger than the number of atoms in the real atmosphere.
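A quick back-of-envelope check (with assumed round numbers: an atmosphere ~100 km deep and ~5.1 x 10^18 kg of air) makes the mismatch explicit:

```python
# Back-of-envelope arithmetic with rough, assumed figures: grid cells for a
# femtometer-resolution atmosphere vs. atoms actually present in it.
earth_surface_m2 = 4 * 3.14159 * (6.371e6) ** 2     # ~5.1e14 m^2
atmos_depth_m = 1.0e5                               # ~100 km, generous
atmos_volume_m3 = earth_surface_m2 * atmos_depth_m  # ~5.1e19 m^3

cells_per_m3 = (1e15) ** 3                          # (1 m / 1 fm)^3 = 1e45
grid_cells = atmos_volume_m3 * cells_per_m3         # ~5e64 cells

# Atoms: ~5.1e18 kg of air, mean molar mass ~0.029 kg/mol, ~2 atoms/molecule
atoms = (5.1e18 / 0.029) * 6.022e23 * 2             # ~2e44 atoms

print(f"grid cells: {grid_cells:.1e}, atoms: {atoms:.1e}")
print(f"cells per atom: {grid_cells / atoms:.1e}")  # ~10^20 cells per atom
```

Roughly 10^20 grid cells per atom: at that resolution you are no longer modeling the atmosphere, you are outnumbering it.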
> This straw man does not exactly demonstrate that conventional weather and climate modeling is being abandoned anytime soon. If the unconventional private investments aren't profitable, the market will deal with them.
Of course it does. The age of heterogeneous compute for weather/climate models is just beginning, yet you do not see NVIDIA optimizing NWP systems to run on GPUs or Google porting them to run on TPUs, do you? Instead, you see those organizations pursuing AI/DL, while core NWP development is left to federal research labs and agencies, which are increasingly struggling to attract the developer and research scientist talent to pursue these activities.
This is a very real challenge that is frequently discussed within the weather community in the United States. I'd hazard a guess that you are not a member of this community?
> much like the local weather, impossible to predict with any accuracy years into the future, and yet the tools used to measure it are consistently getting more accurate, cheaper and smaller. Maybe, like bottle-openers, weather sensors may superfluously start appearing on everything. The more widespread the measurements, the more data describing initial conditions, the better the forecast will be at any interval.
There is virtually no data assimilation technology to support ingesting the vast majority of these data, and even if we had that DA support, we do not run weather models with configurations suitable to take advantage of them. And, as I've mentioned repeatedly, not every measurement leads to an improvement in forecast quality. This is simply _not_ the low- or even high-hanging fruit for improving weather forecast quality and impact.
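To make the "not every measurement helps" point concrete, here is a schematic scalar Kalman analysis step (a toy sketch, not any operational DA code; all numbers are invented for illustration). The weight an observation receives depends entirely on how well its error statistics are characterized, and a noisy sensor whose errors are misreported actively degrades the analysis:

```python
# Schematic, scalar illustration of observation weighting in data
# assimilation. Hypothetical numbers throughout.
def kalman_update(background, b_var, obs, r_var):
    """One scalar Kalman analysis step.

    background, b_var : model first guess and its error variance
    obs, r_var        : observation and its (assumed) error variance
    """
    gain = b_var / (b_var + r_var)
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * b_var
    return analysis, analysis_var

# Model first guess: 280 K with 1 K^2 error variance; truth is 281 K.
bg, bg_var = 280.0, 1.0

# A well-characterized observation pulls the analysis toward the truth...
print(kalman_update(bg, bg_var, obs=281.0, r_var=0.25))  # -> (~280.8, 0.2)

# ...while a cheap, noisy sensor whose error we *misreport* as 0.25 K^2
# gets far too much weight and drags the analysis away from the truth.
print(kalman_update(bg, bg_var, obs=283.5, r_var=0.25))  # -> (~282.8, 0.2)
```

Scaling that error-characterization problem to millions of heterogeneous, consumer-grade sensors is precisely the technology that does not exist.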
I've worked in this exact domain of developing novel weather sensing and observation systems and leveraging them to try to improve forecast quality - across federally-funded research and more than one private company over the past ten years - and it's mostly a fool's errand. If one wants to develop improved, impactful, useful weather forecasts, this is not the path to pursue.