I apologize in advance for how I'm going to describe my problem. I'm tripping over the logic of how to do this, and it's leading to general confusion.
Essentially I have one large table that is a list of depths from various drill sites.
Something like:
ID      Depth   Azimuth
00001       0         2
00001     100         5
00001     200         3
00001     500        70
00001     700        85
00001     900        89
00001    1100        88
00002       0         2
00002     150         5
00002     275         4
00002     500        60
00002     750        86
00002    1000        87
00002    1150        86
What I would like to do with this data is twofold. First, I'd like to group all the rows with the same ID. Once grouped, I would like to take the average of the depth values for each ID. The catch is, I only want to average depth values from rows where the azimuth is greater than 80.
I've been trying to do this in sections, starting with the simpler goal of grouping the data. However, Google searches on this keep turning up people saying not to do it because it "denormalizes" the data. Not doing it really isn't an option, though: I can't change the way the data is presented to me, so averaging it requires grouping by ID in some form or another. Unfortunately, the functions I have found that would group the rows simply dump all the depth values into a single field with some sort of delimiter. That makes it very difficult to average the data, since I would then need a function that parses the values back out of the field. Worse, it also divorces each depth from its azimuth, making it impossible to average only the values that meet my criteria.
This got me thinking that I need to apply the filter, removing the rows where the azimuth is 80 or below, at the same time that I group by ID.
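To make that idea concrete, here is a sketch of what I *think* the filter-then-group step should look like, run through Python's sqlite3 on the sample rows above just so I could test it. The table name `drill` is made up; my real data isn't necessarily in SQLite, and I've stored the IDs as text here only to keep the leading zeros from my example.

```python
import sqlite3

# Build a throwaway in-memory table mirroring the sample data above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drill (ID TEXT, Depth REAL, Azimuth REAL)")
rows = [
    ("00001", 0, 2), ("00001", 100, 5), ("00001", 200, 3),
    ("00001", 500, 70), ("00001", 700, 85), ("00001", 900, 89),
    ("00001", 1100, 88),
    ("00002", 0, 2), ("00002", 150, 5), ("00002", 275, 4),
    ("00002", 500, 60), ("00002", 750, 86), ("00002", 1000, 87),
    ("00002", 1150, 86),
]
conn.executemany("INSERT INTO drill VALUES (?, ?, ?)", rows)

# Filter first (WHERE Azimuth > 80), then group and average per ID.
query = """
    SELECT ID, AVG(Depth)
    FROM drill
    WHERE Azimuth > 80
    GROUP BY ID
    ORDER BY ID
"""
for drill_id, avg_depth in conn.execute(query):
    print(drill_id, avg_depth)
```

On the sample data this averages only the depths 700/900/1100 for ID 00001 and 750/1000/1150 for ID 00002, which is the behavior I'm after. What I can't tell is whether this WHERE-before-GROUP-BY approach is the right way to think about it, or whether my actual environment needs something different.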
About that time my brain turns to mush and I have a hard time keeping everything straight.
I'm at a loss as to where to start with this, and I would really appreciate any assistance.
Some info that may help: all of the fields used are double data types.