Transcript

A summary supporting use of time-domain optimization in inversion and noise removal

Before you start criticizing details of my strike-slip fault interpretation, let me say you probably would never have even known any were there before my system brought them out. Further, until you take the time to consider the points I make in this show, you probably aren't qualified to interpret them. In any case, perfection is not needed to make the point.
Some background and a few basics

How my inversion works – and why it is better –
Since strike-slip movement is horizontal, often no vertical throw will be seen on a section. In addition, pre-stack migration has blurred all breaks. Thus traditional fault-picking guidelines don't apply. This is a simulated sonic log section. Input traces were inverted and the spikes integrated. The industry has ignored the fact that such simulation of lithology is needed to make sense out of the jumble of primary event amplitudes. The bright red event can now be considered direct reservoir evidence. So please follow the clickable guide.
How added resolution makes parallel fault picking a possibility –
The noise – A classic case of shallow critical angle crossover.
Compendium menu and access instructions.
The return-here icon
On the seismic conflict between closed (proven) equation sets and statistical optimization.
To understand why non-linear optimization techniques have not yet won the seismic processing battle, one must consider some history. About the time I wrote the first predictive deconvolution at Western Geo., a brilliant team from MIT came up with a way to describe seismic events in terms of frequency and phase. Until then, mathematicians had not been able to build linear formula combinations that would work in the time domain. This transformation allowed them to write closed equations to attack seismic problems, and the process seemed so logical it took seismic R&D by storm. Most geoscientists consider themselves mathematicians, and as time wore on the beauty of the time series concept won out over my time-domain system. After all – if it can be proved mathematically it must be true (they said), and parallel comparisons were (and are) rare. However, unless I am mistaken, it should be noted that these all-important transforms operate on time-domain data.
I start with several basics to set the stage. The first of these is that the seismic energy continuum consists of thousands of independent primary reflections, each coming from a single reflecting interface (like the top and the bottom of a bed). These individual primary reflections do not mix in the subsurface. Earth filtering creates trailing lobes, the dominant frequency of those lobes decreasing with time. Because of the huge difference in to-and-fro travel time between traces, this filtering creates big differences in primary reflection wave shape.
The second (but associated) fact is that this travel difference shifts the relationship between primaries. To illustrate
the importance of this, suppose the second lobe of the top primary is lined up with the first lobe of the bottom. The fact that
the two primaries have opposite polarity means that a lime might look like a gas sand. Of course all variations in between will
occur. In any case, the composite wavelet shape that emerges from each "recorder stack" is going to vary all over the place.
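To make the interference point concrete, here is a minimal sketch (not part of my system; the 30 Hz zero-phase Ricker wavelet, the spike positions, and the separations are all illustrative assumptions) showing how two opposite-polarity primaries change the composite wave shape and amplitude as their separation shrinks:

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt = 0.002                 # 2 ms sampling (hypothetical)
w = ricker(30, dt, 101)    # hypothetical 30 Hz wavelet

def composite(sep_samples, n=200):
    """Top and bottom primaries of a bed, opposite polarity,
    separated by sep_samples, convolved with one wavelet."""
    r = np.zeros(n)
    r[80] = 1.0                   # top of bed
    r[80 + sep_samples] = -1.0    # bottom of bed, opposite polarity
    return np.convolve(r, w, mode="same")

# As the separation shrinks, the two primaries interfere and the
# composite shape (and its peak amplitude) varies all over the place.
for sep in (40, 20, 10, 5):
    print(sep, round(np.abs(composite(sep)).max(), 3))
```

Note that at small separations the peak amplitude actually exceeds that of an isolated primary, which is the kind of tuning that can make a lime look like a gas sand.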
And now a general statement that should be self-evident. While a normal seismic section shows general structure, it does
not represent actual bed lithology – each event we see on the gathers is the result of an almost accidental, crude stack of
individual primary interface reflections as they arrive at the geophones. The final stack further confuses the wave shape
determination. These unfortunate realities explain why integration is an absolute necessity before we can trust calculated
attributes. They also cast severe doubt on AVO theory. The trouble is that true integration can only be done on spiked results.
My system is already there, but to spike in the frequency and phase domains would require the transforms to rather exactly
model the average wave shape. Because this is so difficult, these routines previously had to settle for just shortening spectra,
which is not good enough.
Currently, frequency / phase systems, in order to overcome the transform problem, are incorporating statistical optimization.
At least that puts us on a more level playing field, with emphasis shifting to my area of expertise, and it now becomes a
contest with higher goals. I discuss this further in the next topic.
Again, to emphasize the importance of integration, I show an example on the next slide.
Input stack
Simulated lithology
What happened here –
1. The system computed the
reflection coefficients (spikes)
on the input stacked trace (the
inversion).
2. It integrated them, did a low
frequency correction, and then
displayed the results.
This was a shale play. The upper detail is from
the top of the shale section. The interfaces are
weaker, but the system seemed to accurately
portray the thicknesses.
Follow the blue link below to a show with dozens of great well matches.
In this simple case, the input
spikes consisted of the blue
top, the blue bottom, the red
top and the red bottom plus
weaker deep ones. The vital
thing here is how it got the
thickness right.
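The integration step described above (compute reflection coefficients, integrate, apply a low-frequency correction) can be sketched with the standard recursive relation between reflection coefficients and acoustic impedance. This is a generic illustration, not my actual code; the spike values and the flat background trend are made up:

```python
import numpy as np

def integrate_spikes(r, z0=1.0):
    """Integrate reflection coefficients into relative impedance using
    r_i = (Z[i+1] - Z[i]) / (Z[i+1] + Z[i])  =>  Z[i+1] = Z[i]*(1+r)/(1-r)."""
    z = np.empty(len(r) + 1)
    z[0] = z0
    for i, ri in enumerate(r):
        z[i + 1] = z[i] * (1 + ri) / (1 - ri)
    return z

def low_freq_correction(z, trend):
    """Blend a smooth background trend (e.g. from wells or stacking
    velocities) back in, since the seismic band lacks low frequencies."""
    return trend + (z - z.mean())

# Four hypothetical spikes: a "blue" top/bottom pair and an
# opposite-polarity "red" pair, echoing the simple case on the slide.
r = np.zeros(50)
r[10], r[20] = 0.1, -0.1      # blue top and bottom
r[30], r[40] = -0.15, 0.15    # red top and bottom
z = integrate_spikes(r)
z_corrected = low_freq_correction(z, trend=np.full(len(z), 2.0))
```

The bed thicknesses fall out directly as the spans over which the integrated impedance stays shifted, which is the vital point the slide makes.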
The true power of statistical optimization was unknown to me until I started pushing it hard. In my original predictive deconvolution I had used the autocorrelation extensively to get a guess at wave shape. Then, one day, working on my new inversion, I decided to see if I could improve my initial wavelet guess. The fresh idea here was to use it to make a convolution pass, record the spike guess timings, and go back to use this information to compute a new wavelet. It soon became evident that each new wavelet explained the trace energy better than the last. Thousands of hours later the system was showing me the statistical power was there. My philosophy was to ignore the time it took and just concentrate on how deep I could go. I must say I was continually surprised. (These are the kinds of things one can do when one does not answer to anyone else.) Of course I evolved tests that allowed the system to exit the loop if the improvement was not significant.
The operating theory of my inversion is to keep making wavelet and spike position guesses until the original trace energy is explained to the limit of system ability. The logic consists of three layers of iteration. The first (open-ended) level loops through consecutive wavelet guess runs, starting with an initial wavelet that is computed using autocorrelation-like logic. The second level loops through the selected set of stacked traces, and the third through the per-trace optimization. Here, at the third level, is where the coefficients are calculated (the same waveform being used for all). The system subtracts the pertinent energy (associated with each spike) from the current working trace. If, at the end of the major loop, no improvement has been made, the system exits this phase of the operation. If improvement has been made, a new wavelet (see below) is computed, and the original, untouched trace is loaded back into the work area. This "return to the original data" keeps the system entirely honest about what it is doing.
To compute the new wavelet for the next major pass, the system moves through the previous spike guesses, adding data from their effective spans to a summation vector. It then formalizes the new wavelet from this vector. Each guess is displayed during the run, and watching the shape develop is an education in itself.
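The loop just described might be caricatured for a single trace like this (a toy sketch, not my production logic; it collapses the trace-set level into one function, and the greedy matched-filter pick, least-squares amplitude, and normalization choice are my illustrative assumptions):

```python
import numpy as np

def invert_trace(trace, wavelet, n_spikes, n_passes):
    """Toy single-trace version: greedily guess spikes, subtract each
    spike's energy from a working copy, re-estimate the wavelet by
    summing original-trace data over the spike spans, then return to
    the untouched original for the next major pass.  Assumes a roughly
    zero-phase (symmetric) wavelet so correlation peaks land on spikes."""
    half = len(wavelet) // 2
    w = wavelet.copy()
    for _ in range(n_passes):                  # major wavelet passes
        work = trace.copy()                    # return to the original data
        picks = []
        for _ in range(n_spikes):              # per-trace optimization
            xc = np.correlate(work, w, mode="same")
            t = int(np.argmax(np.abs(xc)))     # best-explaining spike time
            amp = xc[t] / np.dot(w, w)         # least-squares amplitude
            picks.append((t, amp))
            lo, hi = max(t - half, 0), min(t + half + 1, len(work))
            work[lo:hi] -= amp * w[half - (t - lo): half + (hi - t)]
        # new wavelet: stack original-trace data over the spike spans
        acc = np.zeros_like(w)
        for t, amp in picks:
            lo, hi = max(t - half, 0), min(t + half + 1, len(trace))
            acc[half - (t - lo): half + (hi - t)] += amp * trace[lo:hi]
        w = acc / (np.linalg.norm(acc) + 1e-12)
        residual = float(np.sum(work ** 2))    # energy still unexplained
    return picks, w, residual
```

In practice the exit test would compare the residual energy between major passes and stop when the improvement is no longer significant.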
Obviously I developed some driving logic tricks to push the convergence, but the system is as honest as it can get. It can display the spiked output, and if you understand what you are looking at, it is impressive. I soon learned, however, that potential users did not like the complexity of multiple-interface interpretation, so I went to the integration and soon became convinced that it in itself was a major contribution. It still puzzles me that this truth does not seem to excite interpreters, but I feel the same on the other major points.
This is what I mean by getting the best answer possible. Where formal mathematics has to give up, I can still get a fair set of spikes. This can be looked at as the ability to shift the boundary error to one that statistics can handle. The reason non-linear approaches like mine can beat the frequency/phase methods is that they have the freedom needed to truly harness the amazing power of statistical optimization. The ability to build the required logic into formulae that can be proven mathematically is flashy, but solving the wavelet shape problem is the important key. Obviously the logic still has to be mathematically sound, but perhaps there is a higher level that more effectively brings statistics into the fold.
Strike slip (parallel throw) faulting should be accepted as a geological fact.

Yet it went unnoticed for years because seismic resolution was not good enough for us to see the fault patterns.

I was the first in my knowledge circle to start picking them. I was met with much amusement at my ignorance (and this was just a few years ago). I say the reason I began seeing them was the increased resolution I was getting with my sonic log simulation.
The reason one should expect them all over the world is
that they accompany the tearing caused by continental drift
(and its deep plate movement). One cannot believe drift and
not accept the consequences.
They are hard to see because horizontal movement may not create any vertical throw, and even if it does, stratigraphic interval layering may change, making the visual spotting even more difficult. Fault A, for example, is upthrown (at the right) at the top, and downthrown at the bottom.
Sonic log simulation provides better cross-fault correlation,
as well as emphasizing abrupt changes in character. Some
of these attributes are very subtle, and require visual study.
PowerPoint series covering a span of either inlines or crosslines help the interpreter establish the overall pattern. Once that is checked out, the minor details become quite obvious.
This is Gulf Coast data, where it has become obvious that strike slip faults control reservoir boundaries. That is how much all of this attention to detail matters.
Coherent noise – I start with a North Sea gather example that started me on this theme of refractions caused by critical angle crossings. On the next slide I show the same kind of thing on the Gulf Coast line used for this show.

At the red arrow you can see an early case of an event cloning into a refraction (note the overcorrection indicating horizontal travel). Farther out you see an explosion of energy. This was created by the pre-stack migration logic that could not handle refractions. The same gather data without pre-stack migration showed no such problem. Since there was little need for migration in the first place, this is a heavy price that is commonly paid.
At the white arrow is the chalk horizon. The
same thing happens here, but in spades.
The reason this is so important should be
evident to any geoscientist. I am sure you have
noted it, but I will point it out anyhow. It is that
once the critical angle is crossed, all downward
energy is halted, and refraction noise takes over.
Here, at least half of the offsets contribute nothing
but trouble. I feel confident, from what I have
seen, that this phenomenon is quite common, all
over the world.
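As a reminder of the geometry behind this, here is the textbook two-layer relation for the critical angle and the offset at which it is first reached (a generic illustration, nothing from my system; the flat-refractor model and the velocities in the test are hypothetical):

```python
import math

def critical_angle_deg(v1, v2):
    """Incidence angle beyond which no energy is transmitted downward
    and refraction (head-wave) energy takes over: sin(theta_c) = v1/v2."""
    if v2 <= v1:
        return None    # no critical angle without a velocity increase
    return math.degrees(math.asin(v1 / v2))

def crossover_offset(depth, v1, v2):
    """Offset at which the critical angle is first reached for a flat,
    horizontal refractor at the given depth: x = 2 * depth * tan(theta_c)."""
    theta = math.asin(v1 / v2)
    return 2 * depth * math.tan(theta)
```

A strong velocity jump such as a chalk makes the critical angle shallow, so the crossover offset is small and a large fraction of the far offsets carry nothing but refraction noise.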
By the way, the trace at the immediate left is the stack. The raw power of the stack to bring out the common denominator continually impresses me. Of course that common property is the correct NMO, but in cases like this it is a hopeless battle.
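For reference, the correction the stack relies on is the standard hyperbolic moveout, while a refraction moves out linearly; that mismatch is why the overcorrected refraction noise cannot be flattened. These formulas are textbook relations, not my code, and the test values are hypothetical:

```python
import math

def nmo_time(t0, offset, v):
    """Hyperbolic primary-reflection moveout: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / v) ** 2)

def refraction_time(offset, v_head, intercept):
    """Head-wave (refraction) arrivals move out linearly with offset,
    so no single hyperbolic NMO correction can ever flatten them."""
    return offset / v_head + intercept
```

A primary with the right velocity flattens under the hyperbolic shift and stacks in; the linear refraction gets overcorrected and survives the stack as coherent noise.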
And here it is off the Gulf Coast.
The offending event is the red
producer seen on the first slide.
An extremely deep mute was
needed to get the quality you
see there.
Amplitude vs. offset –
We are here to discuss the coherent noise generated by this critical angle crossover. However I can’t
resist reminding all that AVO logic would have accidentally spotted the bright spot because of the impossibly high amplitudes
caused by the wave trap. To say their theory had anything to do with what we see here is nonsense. This phenomenon is
repeated over and over again in diverse areas, and that probably explains the success rate we hear about. Being a “black box”
advocate I can’t knock those results even if they come from ignorance of the facts. However I think the great stratigraphic detail
we got with our deep mute beats what they can do. Now, if we had the raw data needed to lift this observed noise off, we might
really see some astounding improvement in lithologic resolution (and that would be true in many other areas).
This is the router in Paige’s set of non-linear seismic thoughts.
If you were there, browsing through would be super fast and simple. To get there, see next slide.
Introduction – take a minute to see where I am coming from and why this might be worth your time.
Then look at some great well log matches to see the merits of non-linear optimization.
Or some seismic basics all interpreters should be aware of.
And now look at some sources of seismic noise.
And a quick look at refractions spawned by critical angle crossing.
Now to the results of noise removal on a deep South Louisiana project.
Or back up to look at the system in action on this last one. Sit back and watch the timed slides.
Or here to the results from seemingly hopeless Permian basin data.
Or here to still another example of down wave truncation.
Or here to where I first identified strike slip faulting on a North Sea project.
Or here to a Gulf Coast strike slip fault example.
Or here where I discuss direct reservoir detection.
Or here for a different discussion of intertwined signal and noise.
Or here for a different twist on why ignored noise saved prospects for newcomers.
Or here for a more complete noise primer.
Or here for a comprehensive look at my inversion.
Or here for another look at well log matches.
Or here for a fairly sarcastic look at near/middle/far stack options and a wrap-up.
I have spent a good bit of time collecting PowerPoints into a folder, which I have sent to my FTP site. If you are interested, you have to do the following to access the work.

1. Enter "adaps.exavault.com" in your browser and go there. The username is adaps and the password is adaps1.

2. Select the folder PN and "download all". It sends a zipped file. Create a new folder on your PC named PN, unzip, and load the two files (shows and base.ppsx) into PN.

3. Access "base.ppsx" and you will get the router, which will lead you to all the others. This eliminates the load time problem.
At this fairly late time I have no idea if anyone is accessing the shows the
way I had hoped. Just going into the list blindly is not as productive, since
the menu puts them into a more reasonable form. Communication can be
lonely.
Thanks in advance
Dave Paige