Jeff Sonas's article is from 2009, and I think things have changed since FIDE reduced the rating floor.
The Sonas article on ChessBase is from 2009, but he does address the impact on overall ratings of pre-2009 changes to the rating floor, about halfway down the article (just before his conclusions). Here is a quote:
Finally, I would like to explain my current theory as to why there is inflation. I first heard this explanation in Athens from Nick Faulks, and I see no flaw in it. Here is how the argument goes:
There was originally a very high rating floor. Over time it has gone lower and lower, but for a while it was 2200. This meant if your rating was calculated to be 2200+, then you would show up on the FIDE list, but if your rating was calculated to be below 2200, then you would completely disappear from it. That's why for a long time there were no men rated below 2200 (the women had a lower rating floor initially, I think). You can clearly see the impact of the rating floor of 2200 and then (later) 2000 in this stacked area graph which indicates the overall distribution of players across time:
Now let's think about how it was back when the rating floor was 2200. Consider a hypothetical group of active players, all of whom have a performance rating of 2000 across all their games. Some of those players will certainly outperform their true 2000-strength for a short time, and others will underperform. Only those players from our group that outperform their true strength will make it onto the rating list, whereas the players who underperform will not be anywhere on the list. This means the players who show up on the rating list just above the rating floor, are (as a group) significantly overrated, just waiting to donate rating points to the rest of the pool. Even worse, while these overrated players keep temporary possession of their 2200+ ratings, other players may also receive inflated initial ratings as well, based partially on games against the overrated players. Over time, the overrated players will do worse than their ratings suggest, and their excess rating points will ultimately be distributed throughout the entire rating pool.
If this argument were true, you would expect to see that provisional players (i.e. those players who have not yet played 30 games) on average are actually losing rating points during their time as provisional players. And in fact this is what the data does appear to show. Although you would think that newer players are still improving and would in fact gain points on average, it seems clear that provisional players are actually being overrated. This needs more investigation, and I still don't fully understand why the inflation rate has changed so much over time.
[...]
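The selection effect Sonas describes can be illustrated with a small Monte Carlo sketch. Everything here is my own illustration, not from the article: the 150-point spread of a player's first measured rating around their true strength is an assumed number, and real initial ratings come from performance in actual events, not a single Gaussian draw. Still, it shows the core of the argument: if only measurements above the floor become published ratings, the players who appear on the list are overrated as a group.

```python
import random
import statistics

random.seed(42)

TRUE_STRENGTH = 2000   # every simulated player's real strength
FLOOR = 2200           # the historical FIDE rating floor from the quote
NOISE_SD = 150         # assumed spread of a short-run performance rating
N_PLAYERS = 100_000

# Each player's first measured rating is true strength plus sampling noise.
first_ratings = [random.gauss(TRUE_STRENGTH, NOISE_SD) for _ in range(N_PLAYERS)]

# Only players whose measured rating clears the floor appear on the list;
# everyone who underperformed simply vanishes from the data.
listed = [r for r in first_ratings if r >= FLOOR]

avg_listed = statistics.mean(listed)
print(f"fraction of 2000-strength players who make the list: {len(listed)/N_PLAYERS:.3f}")
print(f"average published rating of those who do:            {avg_listed:.0f}")
print(f"average overrating (points waiting to leak out):     {avg_listed - TRUE_STRENGTH:.0f}")
```

With these assumed numbers, only the lucky tail of the distribution gets published, and that tail averages well above its true strength of 2000; those excess points are exactly what the quote says will later be "distributed throughout the entire rating pool" as the overrated players regress.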
It would be very interesting if Sonas were to publish an update to the older article - with new data and any changes in his ideas.
In the past, new players who ended up slightly above the rating floor were often simply lucky, and they continued to lose rating points after FIDE reduced the rating floor.
I doubt that today most of the players who get a rating slightly above 1000 are going to lose rating points; even if most of them are lucky to be slightly above the rating floor, I think they learn and improve.
Today there are 54 players with a rating below 1050, and it may be interesting to follow them to see whether they lose or gain rating points in the next lists.
[...]
Of course, FIDE already provides a good indication of this (if you can trust extrapolation, always a bit dodgier than interpolation) by tracking each member's performance and making it publicly available.
For example, here is FIDE data for the first player in your list:
Profile
Ratings Progress (you probably want to look here)
Game Statistics
Full Report

For a USCF perspective, I found the following link. It's hot off the press (Aug 2013), but very detailed (what do you expect from Glickman?):
USCF Ratings Committee Report -- August 2013