Velocity plateaus in load testing: Why?


maximus otter


But if the answer is incorrect, the money is wasted nonetheless...

 

As an aside, it's worth noting for a moment the wealth of knowledge and skill made freely available on this forum. We have some awesome gunsmiths in the likes of Baldie, Ronin, Bradders et alia, and excellent shots too, people like Laurie who contribute decades of hard-learned lessons and experience. Not everyone agrees with everyone else all the time, but that process of challenge and questioning is how we progress.

 


1 hour ago, MJR said:

All true, and I think Satterlee explains somewhere that he previously did testing with more rounds but cut it to two at each charge increment for simple economy. Not statistically the thing to do, but lighter on the pocket, quicker, and less barrel erosion.

Yes, but possibly meaningless.


23 hours ago, MJR said:

No idea😁

Beat me to it lol


Everyone has their own methods, and the Satterlee one is nothing new.

I've always looked for the velocity plateaus in a load. They give you a nice wide load window, which shouldn't shift either way in extremes of heat or cold. Don't pick the knife-edge ones.

Being a Yorkshireman, it also tells you when to stop wasting powder.😂

 

 


12 hours ago, meles meles said:

We have some tekkernickel knowledge about large bore rifles and how to tune those for extreme accuracy (i.e. choosing which rivet to hit on a tank a few French miles away) but as has previously been stated in this thread, once one comes down to small bore arms then things get exceedingly non-linear with the slightest fluctuations / changes in apparently almost irrelevant factors showing that they do indeed have an effect and ought to be considered.  As an example, thermal expansion of the barrel and a change in its internal diameter will influence accuracy within a string of shots. How important that is depends upon how good a shooter you are: other errors may mask it but it doesn't go away and can combine with another minor factor to suddenly create a significant effect. Once you start to try and optimise accuracy to the degree some people here desire, then you really are well into the field of statistics: i.e. is what you measured actually what you think you measured and relevant, or something different entirely that you aren't even aware of as being important? No two shots will ever be the same. Two shots in and of themselves are not usually statistically relevant.

In the search for the 'velocity plateau', one would be better firing ten shots at each of many, many charge increments, with an exceptionally accurate chronograph, cleaning the barrel to an identical regimen between each shot. Whilst you are doing that, a team of analytical chemists and metallurgists can be preparing each individual round of ammunition for you using lab-quality equipment and producing an error audit. Meanwhile, a team of naturalists can go out and talk nicely to all the flutterbies for miles around and ask them politely not to flap their wings for the duration of the experiment. Let's assume that each round of ammunition is produced with a combined error audit of less than 0.1% in weight, length, concentricity and chemical stability; each shot is fired in a barrel whose surface roughness varies by less than 1 micron along its entire length for the whole series of, say, 1,000 shots; the temperature on the range varies by less than 0.1 degree Celsius over the same period; the wind speed changes by less than 0.01 m/s; and the wind direction shifts by no more than 1 degree. Your results, at each 0.1 grain charge increment, show a velocity variation of 3 fps with a standard deviation of 0.2 fps. Mathematically, have you actually shown anything, or is it just random variation?
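To put a rough number on that last question, here's a quick sketch (all figures assumed for illustration, not anyone's measured data): given a per-shot velocity SD, how big must a step between two charge-increment means be before it stands out from the noise?

```python
import math

# Illustrative assumptions, not measured data:
shot_sd = 8.0   # per-shot velocity SD in fps (a decent handload + chrono)
n = 10          # shots fired at each charge increment

# Standard error of the difference between two n-shot mean velocities:
se_diff = math.sqrt(shot_sd**2 / n + shot_sd**2 / n)

# A rough 95% detection threshold is ~2 standard errors:
threshold = 2 * se_diff
print(f"Smallest step distinguishable from noise: ~{threshold:.1f} fps")
```

With those assumed numbers, even ten shots per charge can't resolve steps much under about 7 fps, so a 3 fps "plateau" seen with one or two shots per charge is comfortably inside the noise.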

 

Written with amusing irony, but that about nails my view. I simply struggle to accept, given the huge error budget in small-arms internal ballistics, how the pressure-driven (by a half-grain change) theoretical change to propellant burn-rate characteristics being postulated here, which would be absolutely minuscule in comparison to all the other factors, would be empirically observable. Any non-linear observation between increasing charge and increasing velocity could be down to, well, just about anything. And with the sample sizes being shown, I suspect we're just seeing single points in the normal distribution of velocities at each charge weight, and taking them as centre points.

I've not seen a graph plot yet that I would accept wasn't showing random variance, with a few points randomly aligning to show what the viewer's brain wants to see.
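That "points randomly aligning" effect is easy to demonstrate in simulation. The sketch below (all numbers assumed for illustration) generates single-shot ladders from a strictly linear charge-to-velocity relationship plus normal noise, then counts how often an apparent "flat spot" appears where, by construction, none exists:

```python
import random

random.seed(1)

# Assumed, illustrative figures: a strictly linear response with no
# real plateau, plus normally distributed per-shot noise.
charges = [round(23.0 + 0.1 * i, 1) for i in range(11)]  # 23.0-24.0 gr
slope = 80.0     # fps per grain, i.e. a true 8 fps step per 0.1 gr
shot_sd = 8.0    # per-shot velocity SD, fps

def one_ladder():
    """One single-shot-per-charge ladder, Satterlee style."""
    return [2900 + slope * (c - 23.0) + random.gauss(0, shot_sd)
            for c in charges]

trials = 10_000
flat = 0
for _ in range(trials):
    v = one_ladder()
    # "Flat spot": two adjacent charges within 3 fps of each other,
    # despite the true underlying 8 fps step.
    if any(abs(v[i + 1] - v[i]) < 3 for i in range(len(v) - 1)):
        flat += 1

print(f"{100 * flat / trials:.0f}% of pure-noise ladders show a 'plateau'")
```

On these assumptions, the majority of simulated ladders throw up at least one convincing-looking flat spot purely by chance.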

Has anyone here repeated this process on discrete occasions and been able to reproduce results?

 

[As a melancholic aside: Just occurred to me that GBal would have loved this thread and been all over it!]


37 minutes ago, brown dog said:

[As a melancholic aside: Just occurred to me that GBal would have loved this thread and been all over it!]

Absolutely! 

I have been thinking about using this method for a new rifle but do have my doubts too.

I haven't seen mentioned the need for fireformed brass. Surely this would be a requirement, as the rest of the reloading process needs to be so meticulous?

It's well known that new brass is likely to give greater SD and ES and therefore mask the results. Or would repeating with the 1x-fired brass simply refine the same findings?

 


1 hour ago, brown dog said:

Has anyone here repeated this process on discrete occasions and been able to reproduce results?

No, not yet, but I will be shortly. As you describe, I suspected random variance in the results caused by actual fluctuation in charge-weight accuracy, so I am addressing that and will then repeat the test. I am expecting to see a slight shift in the node but hopefully a broadly similar result. Time will tell. I doubt I will have the same environment to shoot in, so that too will influence the results.


1 hour ago, MJR said:

No, not yet, but I will be shortly. As you describe, I suspected random variance in the results caused by actual fluctuation in charge-weight accuracy, so I am addressing that and will then repeat the test. I am expecting to see a slight shift in the node but hopefully a broadly similar result. Time will tell. I doubt I will have the same environment to shoot in, so that too will influence the results.

If we really think we're empirically measuring deflagration characteristics being chemically changed by tiny pressure changes that the charge doesn't 'know' it's experiencing until the pressure peak is reached, then I'd have thought a changed charge temperature (a much bigger burn-rate effector, to my mind) will likely throw the whole experiment for a ball of chalk! It'll be interesting to see the outcome 😊


I have used Satterlee's method a couple of times each on a couple of different loads. One was a waste of time (?): there was no "flat" spot. The other did show the "trend" he described, and I got a very good load. But how do I interpret that? Is it great, or just another crock? Or try another powder?

The velocity trends were vaguely similar, but it was hardly convincing. I don't have the graphs anymore; that's how weak they were. I also suspect the good load I got was because the whole system just shot well, and I could have picked any load within reason and got good results.

The approach is just not statistically valid, for one thing, and the "other" variables outside my ken or control seem to introduce meaningful non-linear effects. Combine that with other real-world effects in extremes of conditions, and I am becoming a sceptic about much of the "scientific" approach to this sport, or at least about the ranges in which it is truly applicable.


1 hour ago, trucraft said:

Absolutely! 

I have been thinking about using this method for a new rifle but do have my doubts too.

I haven't seen mentioned the need for fireformed brass. Surely this would be a requirement, as the rest of the reloading process needs to be so meticulous?

It's well known that new brass is likely to give greater SD and ES and therefore mask the results. Or would repeating with the 1x-fired brass simply refine the same findings?

 

In the original article (6.5 Guys) they do mention the usual standardisation techniques, but that still leaves the methodology highly suspect. As earlier posts point out, how on earth do they expect to control temperature variations?

It's all a bit like an image of Jesus seen on a slice of toast: if we go looking for patterns in otherwise normal distributions of data, we will see them, because we're made that way. Smoke and mirrors.


We all look for a charge weight that gives a wide window, then generally we fiddle with neck tension, primers and seating depth to optimise the load. And then we probably recheck the powder charge at the end of it all to be sure.

Given that so many factors have been optimised, is it not reasonable to think that variance in any one is smoothed by the rest being at their best?

In respect of QuickLoad, I've found that if you alter the default settings for temperature on the day, then tweak the powder burn rate to match your chrono results on the day, and make sure the cartridge and bullet data are all correct, it gives some incredibly accurate predictions. Combine it with "optimal barrel time" and it gets me pretty close to a good load quicker than I normally would.

A benefit for me, as it's a 200-mile round trip to do some testing, and anything that cuts down the number of sessions is a bonus.


  • 4 weeks later...
On 3/11/2019 at 7:36 AM, brown dog said:

Written with amusing irony, but that about nails my view. I simply struggle to accept, given the huge error budget in small-arms internal ballistics, how the pressure-driven (by a half-grain change) theoretical change to propellant burn-rate characteristics being postulated here, which would be absolutely minuscule in comparison to all the other factors, would be empirically observable. Any non-linear observation between increasing charge and increasing velocity could be down to, well, just about anything. And with the sample sizes being shown, I suspect we're just seeing single points in the normal distribution of velocities at each charge weight, and taking them as centre points.

I've not seen a graph plot yet that I would accept wasn't showing random variance, with a few points randomly aligning to show what the viewer's brain wants to see.

Has anyone here repeated this process on discrete occasions and been able to reproduce results?

 

[As a melancholic aside: Just occurred to me that GBal would have loved this thread and been all over it!]

I have done something similar for several of my loads, comparing Satterlee with my own 5 shot OBT/OCW-type approach.

I picked a bullet that I've long struggled to get consistent results from, for whatever reason (60gr V-Max for my .223), and also one where load dev was straightforward (6.5mm/139gr Scenar), and in each case:

1. Started at 6% under max load and worked up in 1% intervals, shooting 5 at each charge;

2. Recorded MV/ES/SD each time;

3. Repeated this at different temperatures (steps of about 5 degrees, from 8 Celsius to 22 Celsius, so repeated 3 times to judge temperature sensitivity);

4. Repeated for two powder batches where similar charges gave similar MVs;

5. Repeated using a 10-shot Satterlee ladder after meticulous brass prep.

Just recently, I compared my .223 loads from last year to ones for the exact same charge this year, and also looked at the charge-to-velocity curves. The .223 correlated very well indeed: last year a fluke ES of zero (!) for a 5-shot group resulted in an MV at the same temperature within 10fps of that same load tested just a week ago in slightly cooler weather (which one would expect). What was also there was a very close correlation over a 0.4grn range of a virtually flat velocity plateau (within 5fps), where my chosen load was smack in the middle (23.8gr N133). That data correlated very well indeed with several 10-shot Satterlee tests I'd also undertaken, although in both cases the flat-spot centre wasn't exactly the same as the OBT centre... I trust the latter more.

Out of interest, I looked at my 139grn 6.5 Scenar data over the past two years... I seem to have amassed considerable field data now for this bullet and RS62. All but two of my field trips for load dev showed remarkable correlation on the charge/MV curve in terms of where the velocity plateaus fell, be that with 1-shot-per-charge increments, 3-shot or 5-shot groups. The velocity-insensitive median point for all but two of the data sets fell at 43.8grns, and the average at 44grns. That's how close it was. There were minor variations in comparative MVs, but this was easily explained by temperature variations.

I returned to my first two data sets for this powder/bullet combination, as I was puzzled why the MVs and the plateaus didn't correlate well at all and were considerably lower than in the 4 subsequent data sets; only when I looked at the dates and the round count for the barrel was the puzzle solved. These data sets were both taken within the first 200 rounds fired from a brand new rifle. The subsequent data sets were all from there to about 1000 rounds.

I remain very confident that, with applied discipline in case prep, you can see clear velocity plateaus for pressure-insensitive nodes, but they are not all especially consistent in breadth. The .223, as one might expect, shows a much narrower plateau (due to far lower case capacity and possibly different ignition characteristics...) than the 6.5 or .308. The largest plateau was with the .308/155grn bullet/RS50: it shot within an ES of about 20 between 44.4 and 45gr RS50 in my heavy-barrelled T3 (I will check those last figures, but I'm pretty sure it was something like that). One observation across all of these is that most of my chosen loads approach a full case. The other consideration is that breadth variations are almost certainly down to other variations, be they random or not, which sort of ties in with your point.

There are caveats and misgivings, though, about being over-reliant on methods like Satterlee's. My biggest reservation is sample size. Whilst I have proved to myself, with enough data sets, that such plateaus exist, there are two areas where data is easily misinterpreted: firstly, at charges where you simply are not approaching a 100% burn due to barrel length (an obvious one, but nonetheless sometimes missed), and secondly wherever ANYTHING changes, e.g. primers, or even batch-to-batch variations in powder. The most obvious problem, though, is tying down a single 10-shot ladder with any confidence, and since only one of my loads, despite careful prep, has been repeatable that way, I'll stick with more statistically relevant methods.

I have tried single-shot-per-charge Satterlee ladders, but only ever get repeatable results with .223/69gr TMK/RS50. I have repeated this ladder several times with near-identical results, give or take a few fps in similar conditions, and have tried the middle of the three plateaus. To my disappointment, it didn't group well and the ES/SD were nothing special! The higher nodal load, though, did quite well.

I remain interested in this topic since, in fairness, we debate these things ad infinitum but perhaps rarely in sufficient detail or depth, as for the most part we skirt over the more empirical side of things. I would be interested in learning more and will keep an open mind, but I cannot make any special claims from my own observations, save that they are what they are.


Hi VarmLR, very interesting info. Out of interest, how do you measure your powder charge, and what tolerance is your equipment?


2 hours ago, TJC said:

Scott Satterlee is doing a course in the U.K....

https://m.facebook.com/roundhousefirearmstraining/

Can't deny his shooting prowess or his service: top man indeed. Not sure, though, that he has any scientific or engineering credentials to convince me his method isn't statistically shaky.


On 3/11/2019 at 7:36 AM, brown dog said:

Written with amusing irony, but that about nails my view. I simply struggle to accept, given the huge error budget in small-arms internal ballistics, how the pressure-driven (by a half-grain change) theoretical change to propellant burn-rate characteristics being postulated here, which would be absolutely minuscule in comparison to all the other factors, would be empirically observable. Any non-linear observation between increasing charge and increasing velocity could be down to, well, just about anything. And with the sample sizes being shown, I suspect we're just seeing single points in the normal distribution of velocities at each charge weight, and taking them as centre points.

I've not seen a graph plot yet that I would accept wasn't showing random variance, with a few points randomly aligning to show what the viewer's brain wants to see.

Has anyone here repeated this process on discrete occasions and been able to reproduce results?

 

[As a melancholic aside: Just occurred to me that GBal would have loved this thread and been all over it!]

Stephen Fry is at it again 😁


1 hour ago, Popsbengo said:

Hi VarmLR, very interesting info, out of interest, how do you measure powder charge and what tolerance is your equipment ?

I use a modified Lyman Gen 6 (Shooting Shed insert for more consistent charges) and check every 5th or 6th charge against additional calibrated scales. I'm not overly bothered about anything beyond 0.1grn (i.e. +/-0.05) and that's probably close to what I'm getting. The plateaus for the 6.5 and .308 are considerably wider, so tighter tolerance just isn't needed imho. Slight case-volume variations have more effect, at a guess... most cartridge makers seem to charge by volume, not by mass.


39 minutes ago, VarmLR said:

I use a modified Lyman Gen 6 (Shooting Shed insert for more consistent charges) and check every 5th or 6th charge against additional calibrated scales. I'm not overly bothered about anything beyond 0.1grn (i.e. +/-0.05) and that's probably close to what I'm getting. The plateaus for the 6.5 and .308 are considerably wider, so tighter tolerance just isn't needed imho. Slight case-volume variations have more effect, at a guess... most cartridge makers seem to charge by volume, not by mass.

The Lyman Gen 6 has an accuracy of +/-0.1gn as you say, so about 0.5% in round numbers. The velocity you measure can't be more significant than that 0.5% variability, so you have a built-in error bar of +/-15fps on 3000fps (a representative MV). Your chronograph will also have an error bar: the MagnetoSpeed is quoted at +/-0.5% tolerance, so that's another 15fps on 3000fps. Total possible error: +/-30fps for identical measured loads. There goes that plateau...
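One nuance worth adding: the +/-30fps is the worst case, where both errors conspire in the same direction. Since the scale and chronograph errors are independent, a root-sum-square combination gives a more typical figure. A quick sketch using the same numbers (and the rough assumption that a percentage charge error maps one-for-one to a percentage velocity error):

```python
import math

mv = 3000.0                # representative muzzle velocity, fps
scale_err = 0.005 * mv     # +/-0.5% charge-weight effect -> 15 fps
chrono_err = 0.005 * mv    # chronograph quoted at +/-0.5%  -> 15 fps

worst_case = scale_err + chrono_err      # both errors in the same direction
rss = math.hypot(scale_err, chrono_err)  # independent errors combined

print(f"Worst case:      +/-{worst_case:.0f} fps")
print(f"Root-sum-square: +/-{rss:.0f} fps")
```

Either way, a roughly +/-20 to 30 fps error bar dwarfs a 3-5 fps "plateau".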

Regarding manufacturing (I have a little professional knowledge here): volume's easier to automate. Mass measurement on the fly, to really accurate tolerances, is very expensive (e.g. pharmaceutical manufacturing equipment). Measuring volume is just a calibrated void with some sort of vibration settling and level adjustment: easy and reasonably stable. The only problem is that powders are hygroscopic, so mass-to-volume fluctuates, requiring either slack tolerances or stabilised environments. Measuring mass beats volume every day for precision, but we can't get great accuracy between batches due to environmental changes. Just another of those pesky variables...


You can only work with what you have at the end of the day. I don't lose sleep when my ES figures aren't quite into single figures, precisely because of the variability in measurement, chronos etc. They are only tools. I don't buy the total 30fps in your example, Pops, as an absolute, mainly because the larger the sample size, the more statistically relevant and representative the results, and the more correlation you can do with observed performance (target data); you really have to use the two together to reach meaningful conclusions. Looking at one half of that equation in isolation lacks rigour. I guess that once you've shot enough within identified areas of low sensitivity, and found a pretty good median point from lots of observed data, you can settle on that load, and no amount of accuracy in measurement gets it better than that, for simple reasons such as batch-to-batch variations. In essence, I guess some comp shooters never really stop refining load data between batches because of these variations... or do they?

RS, for example, allows a safety margin for a whopping plus-or-minus 10% batch variation in burn rate when using QL to determine maximum recommended safe loadings. I'm not sure what the actual batch-to-batch variations might be, but so far I've seen pretty decent consistency from their powders.

I sometimes wonder if we get too concerned and carried away with optimum charge when the reality is that we have no control over batch variations or errors in measurement kit. Where we can make a difference is in case prep, run-out, consistency and batching of projectiles and cases, and experimentation with primers. The latter is an interesting one, because it throws another fly in the ointment: several times when load developing I've been caught out and thought that a specific bullet simply won't shoot from my barrel (high ES/SD, very average groups etc.), and a simple change of primer has tightened things up nicely.

So much to learn, so little time to learn it in!  That's not exclusive  to shooting though...


1 hour ago, VarmLR said:

I don't buy the total 30fps in your example, Pops, as an absolute, mainly because the larger the sample size, the more statistically relevant and representative the results, and the more correlation you can do with observed performance (target data); you really have to use the two together to reach meaningful conclusions.

I agree that taking many samples will 'tend towards the mean'; however, we were originally talking about the Satterlee method, which precludes that, as it proposes a (very) limited sequence, so I think my 30fps error bar is fair.
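That trade-off can be quantified: the uncertainty in a mean velocity shrinks as one over the square root of the shot count, which is exactly what a one-shot-per-charge ladder forgoes. A sketch with an assumed, illustrative per-shot SD:

```python
import math

shot_sd = 10.0   # assumed per-shot velocity SD in fps, for illustration
for n in (1, 2, 5, 10, 30):
    sem = shot_sd / math.sqrt(n)   # standard error of the mean velocity
    print(f"n={n:2d} shots: mean velocity known to about +/-{2 * sem:.1f} fps")
```

On these numbers, one shot per charge leaves a roughly +/-20 fps window around each point on the ladder, while thirty shots narrows it to under +/-4 fps.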


I use the GemPro 250 scales, which are said to weigh to 1/200 of a grain, i.e. one kernel of N165. I weigh my charges to within that, and I get 3 shots with an ES of 5fps, then one that makes it double figures. Sometimes, like yesterday, I had one that sent my ES to 26fps when I was expecting single figures, and I've no I deer why. That isn't usual though.
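Part of that experience may be plain statistics rather than anything in the load: ES is the range of a sample, and the expected range of normally distributed shots grows with shot count even when nothing has changed. A quick simulation sketch (assumed SD, not the poster's data):

```python
import random

random.seed(0)
shot_sd = 5.0      # assumed per-shot velocity SD, fps
trials = 20_000    # simulated strings per shot count

def mean_es(n):
    """Average extreme spread of n shots drawn from a normal distribution."""
    total = 0.0
    for _ in range(trials):
        shots = [random.gauss(0, shot_sd) for _ in range(n)]
        total += max(shots) - min(shots)
    return total / trials

for n in (3, 5, 10, 20):
    print(f"{n:2d} shots: expected ES ~ {mean_es(n):.1f} fps")
```

So a 3-shot string with a tiny ES says very little on its own; add more shots and the ES climbs by itself, no mystery flyer required.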


1 hour ago, No i deer said:

I use the GemPro 250 scales, which are said to weigh to 1/200 of a grain, i.e. one kernel of N165. I weigh my charges to within that, and I get 3 shots with an ES of 5fps, then one that makes it double figures. Sometimes, like yesterday, I had one that sent my ES to 26fps when I was expecting single figures, and I've no I deer why. That isn't usual though.

Dang those variables! Do you weigh, measure and select your primers? Not an ideal way to batch, but I can't think of a better one. One extra puff of energy and there goes that ES.


3 hours ago, Popsbengo said:

I agree that taking many samples will 'tend towards the mean' however we were originally talking about the Satterlee method which precludes that as it proposes a (very) limited sequence so I think my 30fps error bar is fair.

Yes, as far as Satterlee goes; hence my original point that, for most of us, it is simply not representative of what can be achieved given the variables... the whole point is that a more representative sample IS needed. Satterlee himself gives a nod to this in his caveats, pointing out that everything from brass prep to charge mass to measurement must be highly disciplined, but unless your measurement is better than 0.5% you're on a hiding to nothing, unless you fiddle in a fudge factor, which rather negates the whole point of the process. That's why I have serious doubts as to its validity for single-shot ladders.


I brought this method up on another thread. I've tried it on my .243, only to find it matched the load I had already developed using OCW. It'd be interesting to see if anyone else tries reverse-engineering an already accurate load using this method.

Now I'm going to try it with my .308 stalking rifle; the challenge there is that I need the perfect cold bore load. I will also be using it with the new 300 WSM open rifle, where I'm obviously looking for many more shots; I will then use OCW to check the results.


Archived

This topic is now archived and is closed to further replies.




