Discussion about this post

John Day MD

Thank you, Anastassia, for presenting this essential content from so many different angles that there is effectively a way of saying it for anybody who really wants to understand.

Andrew Kazantsev

Can anyone explain this paradox? https://doi.org/10.13140/RG.2.2.36466.82885 (page 6)

An attempt to check satellite precipitation data leads to a dead end. If we sum the integral cloud water content over all heights and divide it by the average moisture-cycling period of the atmosphere (7-10 days), we get an annual precipitation of only a few millimetres (here 2.96 mm per year), while in reality it is on the order of half a metre (510 mm per year for Ashdod). That is, roughly two orders of magnitude of moisture are lost somewhere!

The satellite flew over this area (~1° by 1°) 44 times in 2008, and I take 100 points for each flyby, i.e. a sample of ~4400 points. From these I calculate the average cloud water content for each 240 m layer. This gives a quasi-stationary average water content profile; I sum it and estimate the precipitation rate, assuming that this column of water falls as rain about once every 7-10 days (the approximate period of the hydrological cycle).

There seems to be no logical error, but the result is discouraging: real satellite data on cloud water content give a paradoxically low value compared with the same real precipitation data for the same place. So how does one calculate precipitation correctly from integral cloud water content? I have asked this question on my blog, of well-known climatologists, and even on a NASA forum, but I have not received an answer from anyone.
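The arithmetic the commenter describes can be sketched in a few lines. This is a minimal illustration only: the 8.5-day cycle is assumed as the midpoint of the quoted 7-10-day range, the helper function name is mine, and the column-water values are back-calculated from the numbers in the comment, not taken from the satellite data themselves.

```python
DAYS_PER_YEAR = 365.0

def annual_precip_mm(column_water_mm: float, cycle_days: float) -> float:
    """If a column holding `column_water_mm` of liquid-equivalent water
    rains out once per `cycle_days`, the implied annual precipitation is
    the column water times the number of cycles per year."""
    return column_water_mm * DAYS_PER_YEAR / cycle_days

# Column water implied by the commenter's 2.96 mm/yr result at an 8.5-day cycle:
w = 2.96 * 8.5 / DAYS_PER_YEAR   # ~0.069 mm of liquid water in the column
print(annual_precip_mm(w, 8.5))  # recovers ~2.96 mm/yr by construction

# Column water that would be needed to explain Ashdod's observed
# 510 mm/yr at the same cycle length:
print(510 * 8.5 / DAYS_PER_YEAR)  # ~11.9 mm
```

The sketch makes the size of the gap concrete: with a 7-10-day cycle, the observed 510 mm per year requires an average water column of roughly 10-14 mm, about two orders of magnitude more than the value implied by the commenter's averaged cloud water profile.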

14 more comments...
