Try a search on 'Savart Journal'; that should get you there. They're not trying to keep it secret.
That was what I'd call a minimally interesting video. I know he put a lot of work into it, and I respect that, but the fact is that his controls were not nearly good enough to isolate anything interesting.
Hand strumming, for example, is just not accurate enough: the best way to get a repeatable 'pluck' is the wire-break method. Loop a piece of magnet wire, about #44, behind the string and pull it up until the wire breaks. This gives a force that's repeatable to within about 2%, and according to one expert the signal from the string will be the same within 1dB. It would be nice to get closer than that, but so far that's the best I've heard of.
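If you want a rough sanity check on those numbers: assuming string amplitude scales more or less linearly with pluck force (an assumption, not a measured fact), a 2% force spread converts to a level spread of well under 1dB, so the 1dB figure presumably covers other sources of variation too. A quick sketch:

```python
import math

# Assumption: string amplitude is roughly proportional to pluck force,
# so a +/-2% force spread maps to a +/-2% amplitude spread.
force_spread = 0.02
spread_db = 20 * math.log10(1 + force_spread)  # amplitude ratio in dB
print(f"{spread_db:.2f} dB")  # on the order of 0.2 dB
```

That's small compared to the 1dB figure, which suggests the wire break itself isn't the dominant variability.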
He makes a lot of the fact that the strings had been on for a week. If they were not played in that time, that should not matter for this sort of test, I would think.
He saw changes in the high frequency region: the measurements I've done suggest that it's mostly the low end that changes. In normalizing to the noise floor at the low end he may have masked any such change. Do we know that the low-frequency noise was at the same level in both tests?
The problem with high frequency stuff is that any normal room will have a huge number of standing waves in that range, and, since the wavelengths are so short, the amplitude will vary significantly over a few inches. He did control the mic and guitar positions fairly well, but if something in the room was changed, like a piece of furniture being moved even a little, it could alter the response enough to move peaks in the spectrum chart. That could have been enough to account for that 17kHz peak in the 'after' test: the wavelength there is under an inch, so moving the guitar by a half inch could do it. That's one reason they use anechoic chambers for this sort of test, and clamp the guitar down.
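The wavelength arithmetic is easy to check yourself. Taking the speed of sound as roughly 343m/s in room-temperature air, the frequency at which the acoustic wavelength shrinks to one inch is:

```python
# At what frequency does the acoustic wavelength drop below an inch?
c = 343.0      # speed of sound in m/s, room-temperature air (assumed)
inch = 0.0254  # one inch in metres
f = c / inch   # frequency whose wavelength is exactly one inch
print(f"{f:.0f} Hz")  # roughly 13.5 kHz
```

So anything up around 17kHz has a wavelength of about 0.8 inch, and a half-inch shift of the guitar moves it a large fraction of a wavelength through the room's standing-wave pattern.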
It would have been more interesting, and useful, if he'd done the playing comparisons without telling us which was which, and had the viewers guess. Small objective changes can be swamped by bias: heck, even fairly large ones can!
It's not easy to do good science about this stuff, which is why so little gets done. The people who have the wherewithal are not interested, and those who are interested can't afford to do the tests.