
TECHNICAL UNIVERSITY OF LIBEREC
Faculty of Textile Engineering
Department of Textile Technology
YARN HAIRINESS
COMPLEX CHARACTERIZATION
Jiří Militký, Sayed Ibrahim&
Dana Kremenakova
7-8 May 2009, Lahore, Pakistan
Introduction
Hairiness is considered as the sum of the fibre ends and loops standing out from the main compact yarn body.
The most popular instrument is the Uster hairiness system, which characterizes hairiness by the H value, defined as the total length of all hairs within one centimetre of yarn.
Uster Hairiness
Uster tester
The system introduced by Zweigle counts the number of hairs of defined lengths; the S3 value gives the number of hairs 3 mm and longer.
The information obtained from both systems is limited: the available methods either compress the data into a single value (H or S3) or convert the entire data set into a spectrogram, deleting the important spatial information.
Some laboratory systems based on image processing decompose the hairiness function into two exponential functions (Neckar's model); they are time-consuming and deal only with very short lengths.
Zweigle Hairiness
Principles of Different Spinning Systems
Ring and compact spinning; principle of Siro spinning; OE spinning; vortex spinning.
Outline
Investigation of the possibility of approximating the yarn hairiness distribution by a mixture of two Gaussian distributions.

Techniques for checking the basic assumptions about the hairiness variation curve from the USTER Tester.

Solution of the inverse problem, i.e. specification of the characteristics of the underlying stochastic process.

Description of the H-yarn program for complex analysis of hairiness data.
1st Part: Bimodality
USTER Hairiness Diagram
The signal from the hairiness measurement between distances di is equal to the overall length of fibres protruding from the yarn body over the length Δ = 1 cm:

H(i) ≡ H(di),  di = i·Δ,  i = 0 … N−1

This signal is expressed in the form of the hairiness diagram (HD).
The raw data from the Uster Tester 4 were extracted and converted to individual readings corresponding to yarn hairiness, i.e. the mean value of the total hair length per unit length (one centimetre).
Experimental Part and
Method of Evaluation
More than 75 different cotton yarns (14.5-30 tex) were spun on different spinning systems, namely ring, compact, Siro-spun and open-end spinning, plus plied yarns and a vortex yarn of count 20 tex spun from viscose fibres. All of these yarns were tested for hairiness.
[Figures: hairiness diagram of an OE yarn along the distance; histogram (83 columns) with normal distribution fit; Gaussian curve fit (20 columns) with smooth curve fit; nonparametric density of yarn hairiness over hair lengths 2-12.]
Basics of the Probability Density Function
Optimising the Number of Bins
• The area of a column in a histogram represents a piecewise constant estimator of the sample probability density. Its height is estimated by:

fH(x) = CN(t_{j−1}, t_j) / (N·hj)

where CN(t_{j−1}, t_j) is the number of sample elements in the interval and hj = tj − t_{j−1} is the length of the interval.
The robust range is the interquartile range:

Rq = upper quartile − lower quartile = x(0.75) − x(0.25)

Number of classes: M = int[2.46·(N − 1)^0.4]
Bin width: h = 3.49·min(s, Rq/1.34)·n^(−1/3)

For all samples N = 18458, giving M = 125 and h = 0.133.
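The two rules above can be checked directly. A minimal sketch in Python (the slides use Matlab); the scale values s and Rq passed in below are illustrative placeholders, only the class count M depends solely on N:

```python
import math

def histogram_rules(n, s, rq):
    """Number of classes M and bin width h from the rules on this slide:
    M = int[2.46 * (N-1)^0.4],  h = 3.49 * min(s, Rq/1.34) * n^(-1/3)."""
    m = int(2.46 * (n - 1) ** 0.4)
    h = 3.49 * min(s, rq / 1.34) * n ** (-1.0 / 3.0)
    return m, h

# with the sample size quoted on the slide; s and rq are dummy values here
m, h = histogram_rules(18458, s=1.0, rq=1.0)
```

With N = 18458 this reproduces the M = 125 classes quoted on the slide.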
Kernel Density Function
The kernel-type nonparametric (distribution-free) estimate of the sample probability density function is

f̂(x) = (1/(N·h)) · Σ_{i=1..N} K((x − xi)/h)

where K is a bi-quadratic kernel function, symmetric around zero, with the same properties as a PDF.
Optimal bandwidth h:
1. based on the assumption of near-normality
2. adaptive smoothing
3. exploratory (local hj): requirement of equal probability in all classes

h = 0.9·min(s, Rq/1.34)·n^(−1/5), giving h = 0.1278
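A minimal stdlib-Python sketch of this estimator with a bi-quadratic (biweight) kernel, K(u) = (15/16)(1 − u²)² for |u| ≤ 1; the five data points are invented for the demo:

```python
def biweight(u):
    """Bi-quadratic (biweight) kernel: symmetric around zero, integrates to 1."""
    return 15.0 / 16.0 * (1.0 - u * u) ** 2 if abs(u) <= 1.0 else 0.0

def kde(x, data, h):
    """Kernel density estimate f^(x) = (1/(N*h)) * sum_i K((x - x_i)/h)."""
    return sum(biweight((x - xi) / h) for xi in data) / (len(data) * h)

data = [2.0, 3.0, 3.5, 4.0, 5.0]   # toy sample, symmetric around 3.5
fx = kde(3.5, data, h=1.0)
```

Because the kernel is symmetric, the estimate inherits the symmetry of the sample: here kde(3.0, ...) equals kde(4.0, ...).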
Bi-modal Distribution
Two Gaussian Distributions
The bi-modal distribution can be approximated by the sum of two Gaussian components:

fG(xi) = A1·exp[−((xi − B1)/C1)²] + A2·exp[−((xi − B2)/C2)²]

where A1 and A2 are the proportions of shorter and longer hairs respectively, B1 and B2 are the means, and C1 and C2 are the standard deviations.
The H-yarn program, written in Matlab and using the least-squares method, is used for estimating these parameters.
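The slides estimate these parameters by least squares in Matlab. As a stdlib-only illustration, the sketch below recovers the same kind of parameters (proportions A, means B, standard deviations C) with the EM algorithm instead; the synthetic "short-hair"/"long-hair" sample and all its numbers are invented for the demo:

```python
import math
import random

def em_two_gaussians(data, iters=150):
    """Fit a two-component Gaussian mixture by EM; returns the proportions,
    means and SDs that the slides denote A1, A2, B1, B2, C1, C2."""
    xs = sorted(data)
    n = len(xs)
    # crude initialisation: split the sorted sample at the median
    lo, hi = xs[:n // 2], xs[n // 2:]
    B = [sum(h) / len(h) for h in (lo, hi)]
    C = [max(1e-3, math.sqrt(sum((x - b) ** 2 for x in h) / len(h)))
         for h, b in zip((lo, hi), B)]
    A = [0.5, 0.5]

    def pdf(x, b, c):
        return math.exp(-0.5 * ((x - b) / c) ** 2) / (c * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E step: responsibility of component 0 for every point
        r0 = [A[0] * pdf(x, B[0], C[0]) /
              (A[0] * pdf(x, B[0], C[0]) + A[1] * pdf(x, B[1], C[1])) for x in xs]
        # M step: re-estimate proportions, means and SDs
        w0 = sum(r0)
        w1 = n - w0
        A = [w0 / n, w1 / n]
        B = [sum(r * x for r, x in zip(r0, xs)) / w0,
             sum((1 - r) * x for r, x in zip(r0, xs)) / w1]
        C = [max(1e-3, math.sqrt(sum(r * (x - B[0]) ** 2 for r, x in zip(r0, xs)) / w0)),
             max(1e-3, math.sqrt(sum((1 - r) * (x - B[1]) ** 2 for r, x in zip(r0, xs)) / w1))]
    return A, B, C

random.seed(0)
# invented bimodal "hairiness" sample: short-hair and long-hair components
sample = ([random.gauss(3.2, 0.4) for _ in range(500)] +
          [random.gauss(4.7, 0.7) for _ in range(750)])
A, B, C = em_two_gaussians(sample)
```

On this well-separated sample EM recovers means near 3.2 and 4.7 and a short-hair proportion near 0.4.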
Analysis of Results
Significance of the Bi-Modality of the Distribution
Bimodality parametric tests:
• mixture-of-distributions estimation and likelihood ratio test
• test of significant distance between modes (separation)
Bimodality nonparametric tests:
• kernel density (Silverman test)
• CDF (DIP, Kolmogorov tests)
• rankit plot
Mixture of two Gaussian Distributions
A mixture of two distributions does not necessarily always result in a bimodal distribution.
[Figure: density of a two-component mixture that remains unimodal (x = 0-8, f(x) up to 0.5).]
Dip Test I
Points A and B are modes, the shaded areas C and D are bumps, area E is the dip and F is a shoulder point.
Dip test statistic: the largest vertical difference between the empirical cumulative distribution FE and the uniform distribution FU.
Analysis of Results II
Mixture of Gaussian distributions: CDF plots
Probability density function (PDF) f(x), cumulative distribution function (CDF) F(x), and empirical CDF (ECDF) Fn(x).
A unimodal CDF is convex on (−∞, m) and concave on [m, ∞); a bimodal CDF has one bump.
Let Gp = arg min supx |Fn(x) − G(x)|, where G(x) ranges over unimodal CDFs.
Dip statistic: d = supx |Fn(x) − Gp(x)|
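The true dip statistic minimises over all unimodal CDFs G. As a simplified, Kolmogorov-style stand-in, the sketch below uses the best-fitting normal CDF as G and computes sup |Fn(x) − G(x)|; the unimodal and bimodal samples are invented for the demo:

```python
import math
import random

def sup_cdf_distance(data):
    """Sup distance between the ECDF and a normal CDF fitted by moments.
    A simplified stand-in for the dip statistic (which searches all
    unimodal CDFs, not just the normal family)."""
    xs = sorted(data)
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    phi = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2))))
    d = 0.0
    for i, x in enumerate(xs):
        # the ECDF jumps from i/n to (i+1)/n at x; check both sides
        d = max(d, abs((i + 1) / n - phi(x)), abs(i / n - phi(x)))
    return d

random.seed(0)
unimodal = [random.gauss(4.0, 1.0) for _ in range(2000)]
bimodal = ([random.gauss(2.8, 0.4) for _ in range(1000)] +
           [random.gauss(5.2, 0.6) for _ in range(1000)])
d_uni = sup_cdf_distance(unimodal)
d_bi = sup_cdf_distance(bimodal)
```

The statistic stays small for the unimodal sample and grows clearly for the well-separated bimodal one.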
[Figures: empirical CDF compared with rectangular (uniform) and normal CDFs over the hairiness range 1-8; detail over 2-5.]
Dip statistic (for n = 18500): 0.0102
Critical value (n = 1000): 0.017
Critical value (n = 2000): 0.0112
Analysis of Results III
Likelihood ratio test
For the single normal distribution model (μ, σ), with a data set containing n observations, the likelihood function is

Lu = Π_{i=1..n} (1/(σ·√(2π)))·exp[−(xi − μ)²/(2σ²)]

The mixture of two normal distributions assumes that each data point belongs to one of two sub-populations; the likelihood of this model is

Lb = Π_{i=1..n} [p·φ(xi; μ1, σ1) + (1 − p)·φ(xi; μ2, σ2)]

The likelihood ratio can be calculated from Lu (unimodal) and Lb (bimodal) as

LR = 2·(ln Lb − ln Lu)
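A stdlib-Python sketch of the two log-likelihoods and their ratio. For simplicity the mixture parameters are taken as known (in the slides they come from the H-yarn fit); the sample itself is invented for the demo:

```python
import math
import random

def log_lik_normal(data):
    """Log-likelihood of the single-normal model at its MLE (mu, sigma)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    # closed form: -n/2 * log(2*pi*var) - n/2 at the MLE
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def log_lik_mixture(data, p, m1, s1, m2, s2):
    """Log-likelihood under a given two-component normal mixture."""
    phi = lambda x, m, s: math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(math.log(p * phi(x, m1, s1) + (1 - p) * phi(x, m2, s2)) for x in data)

random.seed(0)
data = ([random.gauss(3.0, 0.4) for _ in range(600)] +
        [random.gauss(5.0, 0.8) for _ in range(900)])
Lu = log_lik_normal(data)
Lb = log_lik_mixture(data, 0.4, 3.0, 0.4, 5.0, 0.8)  # mixture params assumed known
LR = 2.0 * (Lb - Lu)
```

For genuinely bimodal data the mixture fits far better and LR is large and positive.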
Analysis of Results IV
Significance of the difference of means
• Two-sample t test of equality of means
• T1: equal variances
• T2: different variances
Analysis of Results VI
PDF and CDF
Kernel density estimator: an adaptive kernel density estimator for univariate data. The choice of the bandwidth h determines the amount of smoothing: a bandwidth that is too small produces a noisy estimate, while for a long-tailed distribution a fixed bandwidth, constant across the entire sample, over-smooths the tails. The MATLAB routine AKDEST 1D evaluates the univariate adaptive kernel density estimate.
[Figure: hairiness histogram (Rel. Freq. against hairiness 0-9) with adaptive kernel density fit, h = 0.33226.]
The empirical CDF is computed as

cdf(j) = Σ_{i=1..j} x(i) / Σ_{i=1..N} x(i),  x(i) ≤ x(i+1)
Bi-modality of Yarn Hairiness
Mixed Gaussian Distribution
Parameter estimation of the mixture-of-two-Gaussians model:

Parameter            Vortex   RS      Siro    Plied   OE      Compact
Mean L short hair    2.75     3.90    3.24    4.47    4.08    3.21
Mean L long hair     3.46     5.87    4.48    6.80    5.75    4.71
SD1 short hair       0.32     0.61    0.36    0.63    0.56    0.41
SD2 long hair        0.52     1.21    0.95    1.89    1.14    0.66
Portion short hair   0.42     0.32    0.27    0.22    0.27    0.41
Portion long hair    0.57     0.67    0.73    0.77    0.70    0.57
Mean total hair/cm   3.462    5.30    4.26    6.32    5.34    4.12
SD hair              0.77     1.49    1.07    1.99    1.39    1.01
Bimodal separation   0.70     0.54    0.47    0.46    0.49    0.70
T test statistic     189.6    146.24  127.56  125.23  131.4   190.73
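The "bimodal separation" row appears to follow S = (B2 − B1) / (2·(C1 + C2)); this definition is an assumption inferred from the table (it reproduces e.g. the RS, Siro, Plied, OE and Compact entries) and can be checked directly:

```python
def bimodal_separation(b1, b2, c1, c2):
    """Bimodal separation S = (B2 - B1) / (2 * (C1 + C2)).
    The formula is inferred from the tabulated values, not stated on the slide."""
    return (b2 - b1) / (2.0 * (c1 + c2))

# RS column of the table: means 3.90 / 5.87, SDs 0.61 / 1.21
s_rs = bimodal_separation(3.90, 5.87, 0.61, 1.21)
# Compact column: means 3.21 / 4.71, SDs 0.41 / 0.66
s_compact = bimodal_separation(3.21, 4.71, 0.41, 0.66)
```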
Basic Definitions of Time Series
• Since yarn hairiness is measured at equal distances (equal time intervals), the data obtained can be analyzed on the basis of time series.
• A time series is a sequence of observations taken sequentially in time. The nature of the dependence among observations of a time series is of considerable practical interest.
• First of all, one should investigate the stationarity of the system.
• A stationary model assumes that the process remains in equilibrium about a constant mean level. The random process is strictly stationary if all statistical characteristics and distributions are independent of ensemble location.
• Many tests, such as the nonparametric test, the run test, the variability (difference) test and the cumulative periodogram construction, are provided to explore the stationarity of the process. The H-yarn program is capable of estimating all of these.
Basic Assumptions I
Let the hairiness relative deviation y(i) be a "spatial" realization of a random process y = y(di). For the analysis of this series it is necessary to know whether some basic assumptions about the behavior of the underlying random process can be accepted: stationarity, ergodicity, independence.
[Figure: hair relative deviation [%] of a real yarn over 8 segments, length L = 0-25 m; one realization yj(i) for j = const., i = 1..N, and the ensemble values yj(i) for i = const., j = 1..M.]
In fact, the realizations of the random process are yj(i), where the index j corresponds to individual realizations and the index i corresponds to the distance di. In the case of ensemble samples, the values yj(i) for i = const. and j = 1..M are at our disposal. In the majority of applications the ensemble samples are not available and the statistical analysis is based on the one spatial realization yj(i) for j = 1 and i = 1..N.
For the creation of the data distribution and the computation of moments, additional assumptions are necessary.
Stationarity
The random process is strictly stationary if all statistical characteristics and distributions are independent of ensemble location, i.e. the mean and variance do not change over time or position.
Wide-sense stationarity of g-th order implies independence of the first g moments of ensemble location.
Second-order stationarity implies that:
• the mean value E(y(i)) = E(y) is constant (not dependent on the location di);
• the variance D(y(i)) = D(y) is constant (not dependent on the location di);
• the autocovariance, autocorrelation and variogram, which are functions of di and dj, do not depend on the locations but only on the lag h = |di − dj|:

c(y(di)·y(di+h)) = c(h),  with E(y) = 0 for centered data
Ergodicity
For an ergodic process the "ensemble" mean can be replaced by the average across the distance (from one spatial realization), and the autocorrelation R(h) → 0 for all sufficiently large h.
Ergodicity is very important, as the statistical characteristics can then be calculated from one single series y(i) instead of from ensembles, which frequently are difficult to obtain.
Inverse Problem
The inverse problem: given a series y(i), how to discover the characteristics of the underlying process. Three approaches are mainly applied:
• the first based on random stationary processes,
• the second based on self-affine processes with a multi-scale nature,
• the third based on the theory of chaotic dynamics.
In reality, multi-periodic components are often mixed with random noise.
Distribution Check
In most methods for data processing based on stochastic models, a normal distribution is assumed. If the distribution proves to be non-normal, there are three possibilities:
1. the process is linear but non-Gaussian;
2. the process has linear dynamics, but the data are the result of a non-linear "static" transformation;
3. the process has non-linear dynamics.
[Figure: histograms of a real yarn for four sub-samples (division of the 400 m yarn data into 100 m pieces).]
Pseudo-Ensemble Analysis
It is suitable to construct the histograms for e.g. four quarters of the data separately and inspect non-normality or asymmetry of the distribution. The statistical characteristics (means and variances) of these sub-series can support the wide-sense stationarity assumption.
[Figure: ensemble mean (ENS-MEAN) and ensemble standard deviation (ENS-SD) along the yarn length L = 0-25 m.]
The t-test statistic is used for comparison of the two most distant means, and the F-ratio statistic for comparison of the two most distant variances.
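The segment-wise comparison of means can be sketched in a few lines of stdlib Python; the pooled (equal-variance) t statistic corresponds to the "T1" case above, and the toy series is invented for the demo:

```python
import math

def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal variances assumed, the 'T1' case)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))

def segment_means_t(series, n_segments=4):
    """Split a series into segments and t-test the two most distant segment means."""
    k = len(series) // n_segments
    segs = [series[i * k:(i + 1) * k] for i in range(n_segments)]
    segs.sort(key=lambda s: sum(s) / len(s))
    return two_sample_t(segs[-1], segs[0])

# toy non-stationary series: the third quarter has a shifted mean
series = [0, 1, 0, 1, 0, 1, 0, 1, 5, 6, 5, 6, 0, 1, 0, 1]
t = segment_means_t(series)
```

A large |t| between the two most distant segment means speaks against the constant-mean (stationarity) assumption.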
Stationarity Graphs I
The zero-order variability diagram is a plot of y(i+1) against y(i). In the case of independence, a random cloud of points appears on this graph; autocorrelation of first order is indicated by a linear trend.
The first-order variability diagram is constructed by taking the first differences d1(i) = y(i) − y(i−1) as the new data set and plotting d1(i+1) against d1(i). This diagram "correlates" three successive elements of the series y(i).
The second-order variability diagram uses the second differences d2(i) = d1(i) − d1(i−1), and the third-order variability diagram the third differences d3(i) = d2(i) − d2(i−1).
As the order of the variability diagram increases, the domain of correlations increases as well.
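The successive differencing behind these diagrams is straightforward; a minimal sketch with a toy quadratic series:

```python
def differences(y, order=1):
    """Successive differencing used for the variability diagrams:
    d1(i) = y(i) - y(i-1), d2(i) = d1(i) - d1(i-1), and so on."""
    d = list(y)
    for _ in range(order):
        d = [b - a for a, b in zip(d, d[1:])]
    return d

y = [1, 4, 9, 16, 25]      # quadratic trend
d1 = differences(y, 1)     # first differences
d2 = differences(y, 2)     # second differences: constant, trend removed
```

The order-k diagram is then the scatter of dk(i+1) against dk(i).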
[Figure: variability diagrams of the test yarn for the first to fourth differences.]
Stationarity Graphs II
For characterization of the independence hypothesis against the periodicity alternative, the cumulative periodogram C(fi) can be constructed:

C(fi) = Σ_{j=1..i} I(fj) / (N·s²)

For a white-noise series (independent, identically and normally distributed data), the plot of C(fi) against fi is scattered about a straight line joining the points (0, 0) and (0.5, 1). Periodicities tend to produce a series of neighboring values of I(fi) which are large; periodicities therefore appear as bumps on the expected line. The limit lines for the 95 % confidence interval of C(fi) are drawn at fixed distances parallel to this line.
[Figure: cumulative periodogram (cumul. periodogram [-]) against relative frequency (rel. frequency [-], 0-0.5).]
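A stdlib-Python sketch of the cumulative periodogram. It uses a naive O(N²) DFT and normalises the running sum by its total so that C ends at 1 (the slides normalise by N·s², which agrees up to the half-spectrum convention); the pure sinusoid is a toy input:

```python
import math

def periodogram(y):
    """Naive DFT periodogram I(f_k) = |Y_k|^2 / N for k = 1..N//2 (mean removed)."""
    n = len(y)
    mu = sum(y) / n
    yc = [v - mu for v in y]
    I = []
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(yc))
        im = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(yc))
        I.append((re * re + im * im) / n)
    return I

def cumulative_periodogram(y):
    """C(f_i): running sum of the periodogram, normalised to end at 1."""
    I = periodogram(y)
    total = sum(I)
    c, out = 0.0, []
    for v in I:
        c += v
        out.append(c / total)
    return out

# a pure sinusoid concentrates the whole cumulative periodogram at its frequency
n = 64
y = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
C = cumulative_periodogram(y)
```

For white noise C climbs roughly linearly; here it jumps from about 0 to about 1 at k = 5, the bump signature of a periodicity.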
Spatial Correlations
Autocovariance function:

c(i, h) = c(h) = cov(y(i)·y(i+h)) = E(y(0)·y(h))

The second equality is valid for centered data, E(y) = 0, and wide-sense stationarity (the autocovariance depends on the lag h, not on the position i). For lag h = 0 the variance results: s² = v = c(0).
Autocorrelation function:

R(h) = cov(y(0)·y(h)) / v = c(h) / c(0)
Autocorrelation R(1)
The autocorrelation function is simply a comparison of a signal y(i), i = 0..N−1, with itself as a function of the shift. The autocorrelation coefficient of first order, R(1), can be evaluated as

R(1) = Σj (y(j) − ȳ)·(y(j+1) − ȳ) / [s²·(N − 1)]

Roughly, if R(1) lies in the interval

−2/√N ≤ R(1) ≤ 2/√N

no autocorrelation of first order is identified.
[Figure: autocorrelation function of sample h40sussen.txt up to lag 150; R falls from 1.00 to about 0.93.]
Frequency Domain
The transformation from the time domain to the frequency domain and back again is based on the Fourier transform and its inverse, computed with the Fast Fourier Transform (FFT). There are many types of spectrum analysis: the PSD, the amplitude spectrum, the autoregressive frequency spectrum, the moving-average frequency spectrum, the ARMA frequency spectrum and many other types are included in the H-yarn program.

y(i) = a0 + Σ_{k=1..m} [ak·cos(ωk·i) + bk·sin(ωk·i)],  i = 0..N−1
[Figures: hairiness signal of sample h40sussen.txt along the distance 0-500; Fourier frequency spectrum; power spectral density (PSD TISA) over frequencies 0-25; parametric reconstruction with 7 sine components (r² = 1e−08, SE = 0.994675, F = 9.2185e−06).]
Hurst Exponent and Fractal Dimension
[Figure: R/S analysis of sample h40rieter.txt; H = 0.6866, SE(H) = 0.001768, r² = 0.9739.]
The cumulative sum of white, identically distributed noise is known as Brownian motion or a random walk. The Hurst exponent is a good estimator for measuring the fractal dimension; the Hurst relation is R/S ∝ n^H, where the parameter H is the Hurst exponent.
In the measurement of a surface profile, the data are available through a one-dimensional line transect of the surface. The fractal dimension can then be measured as 2 − H; in this case the fractal dimension of the cumulative of white noise will be 1.5. More useful is expressing the fractal dimension by 1/H, using probability space rather than geometrical space.
The fractal dimension D is then a number between 1 (for a smooth curve) and 2.
One of the best methods for the evaluation of β or H is based on the power spectral density: for small Fourier frequencies

ωk = 2π·fk = 2π·k/N,  k = 1..m

the power spectral density behaves as g(ω) ∝ ω^−(1+β), β > 0, and β is often evaluated from the empirical plot of the log of the power spectral density versus the log of the frequency.
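A minimal sketch of classical rescaled-range (R/S) analysis in stdlib Python, estimating H as the slope of log(R/S) against log(n); the window sizes and the white-noise input are illustrative choices:

```python
import math
import random

def rescaled_range(y):
    """R/S of one window: range of mean-adjusted cumulative sums over the std dev."""
    n = len(y)
    mean = sum(y) / n
    dev, c = [], 0.0
    for v in y:
        c += v - mean
        dev.append(c)
    r = max(dev) - min(dev)
    s = math.sqrt(sum((v - mean) ** 2 for v in y) / n)
    return r / s if s > 0 else 0.0

def hurst(y, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H as the least-squares slope of log(R/S) versus log(n)."""
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(y[i:i + w]) for i in range(0, len(y) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (v - my) for x, v in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
H = hurst(noise)  # white noise should give H near 0.5
```

Small-sample R/S estimates are biased somewhat above 0.5 for white noise; persistent series such as the hairiness signal give H clearly above 0.5, as in the H = 0.6866 figure.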
Summary of results of ACF, Power
Spectrum and Hurst Exponent
Samples from H-Yarn program. Autocorrelation, Spectrum, Hurst graphs.
Conclusions
• The yarn hairiness distribution can be fitted by a bimodal model described by two mixed Gaussian distributions. The portion, mean and standard deviation of each component lead to a deeper understanding and evaluation of hairiness.
• This method is quick compared to image analysis systems.
• The H-yarn system is a powerful program for the evaluation and analysis of yarn hairiness as a dynamic process, in both the time and frequency domains.
• The H-yarn program is capable of estimating the short- and long-term dependency.
• The hairiness of vortex yarn is the lowest, followed by compact yarns. Siro-spun yarns have lower values than ring, plied and open-end yarns, mainly due to the short component and the portion of hairs. The highest hairiness values belong to plied yarns, since they pass through more operations (doubling and twisting).
THANK YOU