"""
Basic statistics module.

This module provides functions for calculating statistics of data, including
averages, variance, and standard deviation.

Calculating averages
--------------------

==================  ==================================================
Function            Description
==================  ==================================================
mean                Arithmetic mean (average) of data.
fmean               Fast, floating-point arithmetic mean.
geometric_mean      Geometric mean of data.
harmonic_mean       Harmonic mean of data.
median              Median (middle value) of data.
median_low          Low median of data.
median_high         High median of data.
median_grouped      Median, or 50th percentile, of grouped data.
mode                Mode (most common value) of data.
multimode           List of modes (most common values of data).
quantiles           Divide data into intervals with equal probability.
==================  ==================================================

Calculate the arithmetic mean ("the average") of data:

>>> mean([-1.0, 2.5, 3.25, 5.75])
2.625


Calculate the standard median of discrete data:

>>> median([2, 3, 4, 5])
3.5


Calculate the median, or 50th percentile, of data grouped into class
intervals centred on the data values provided.  E.g. if your data points
are rounded to the nearest whole number:

>>> median_grouped([2, 2, 3, 3, 3, 4])  #doctest: +ELLIPSIS
2.8333333333...

This should be interpreted in this way: you have two data points in the class
interval 1.5-2.5, three data points in the class interval 2.5-3.5, and one in
the class interval 3.5-4.5.  The median of these data points is 2.8333...


Calculating variability or spread
---------------------------------

==================  =============================================
Function            Description
==================  =============================================
pvariance           Population variance of data.
variance            Sample variance of data.
pstdev              Population standard deviation of data.
stdev               Sample standard deviation of data.
==================  =============================================

Calculate the standard deviation of sample data:

>>> stdev([2.5, 3.25, 5.5, 11.25, 11.75])  #doctest: +ELLIPSIS
4.38961843444...

If you have previously calculated the mean, you can pass it as the optional
second argument to the four "spread" functions to avoid recalculating it:

>>> data = [1, 2, 2, 4, 4, 4, 5, 6]
>>> mu = mean(data)
>>> pvariance(data, mu)
2.5


Statistics for relations between two inputs
-------------------------------------------

==================  ====================================================
Function            Description
==================  ====================================================
covariance          Sample covariance for two variables.
correlation         Pearson's correlation coefficient for two variables.
linear_regression   Intercept and slope for simple linear regression.
==================  ====================================================

Calculate covariance, Pearson's correlation, and simple linear regression
for two inputs:

>>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3]
>>> covariance(x, y)
0.75
>>> correlation(x, y)  #doctest: +ELLIPSIS
0.31622776601...
>>> linear_regression(x, y)  #doctest: +ELLIPSIS
LinearRegression(slope=0.1, intercept=1.5)


Exceptions
----------

A single exception is defined: StatisticsError is a subclass of ValueError.

"""

__all__ = [
    'NormalDist',
    'StatisticsError',
    'correlation',
    'covariance',
    'fmean',
    'geometric_mean',
    'harmonic_mean',
    'kde',
    'kde_random',
    'linear_regression',
    'mean',
    'median',
    'median_grouped',
    'median_high',
    'median_low',
    'mode',
    'multimode',
    'pstdev',
    'pvariance',
    'quantiles',
    'stdev',
    'variance',
]

import math
import numbers
import random
import sys

from fractions import Fraction
from decimal import Decimal
from itertools import count, groupby, repeat
from bisect import bisect_left, bisect_right
from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum, sumprod
from math import isfinite, isinf, pi, cos, sin, tan, cosh, asin, atan, acos
from functools import reduce
from operator import itemgetter
from collections import Counter, namedtuple, defaultdict


class StatisticsError(ValueError):
    pass


def _sum(data):
    """_sum(data) -> (type, sum, count)

    Return a high-precision sum of the given numeric data as a fraction,
    together with the type to be converted to and the count of items.

    Examples
    --------

    >>> _sum([3, 2.25, 4.5, -0.5, 0.25])
    (<class 'float'>, Fraction(19, 2), 5)

    Some sources of round-off error will be avoided:

    # Built-in sum returns zero.
    >>> _sum([1e50, 1, -1e50] * 1000)
    (<class 'float'>, Fraction(1000, 1), 3000)

    Fractions and Decimals are also supported:

    >>> from fractions import Fraction as F
    >>> _sum([F(2, 3), F(7, 5), F(1, 4), F(5, 6)])
    (<class 'fractions.Fraction'>, Fraction(63, 20), 4)

    >>> from decimal import Decimal as D
    >>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")]
    >>> _sum(data)
    (<class 'decimal.Decimal'>, Fraction(6963, 10000), 4)

    Mixed types are currently treated as an error, except that int is
    allowed.
    """


def _coerce(T, S):
    """Coerce types T and S to a common type, or raise TypeError.

    Coercion rules are currently an implementation detail.  See the
    CoerceTest test class in test_statistics for details.
    """


def _exact_ratio(x):
    """Return Real number x to exact (numerator, denominator) pair.

    >>> _exact_ratio(0.25)
    (1, 4)

    x is expected to be an int, Fraction, Decimal or float.
    """


def _convert(value, T):
    """Convert value to given numeric type T."""
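# The body of _sum() above was not recoverable here, so the following is a
# minimal sketch of the behaviour its docstring documents: accumulate the
# values exactly as Fractions so that float round-off cannot creep in, and
# report the result type and the item count.  The name _sum_sketch and its
# crude type handling are illustrative only and are not part of the module.
def _sum_sketch(data):
    from fractions import Fraction as _F
    total = _F(0)
    T = int
    n = 0
    for x in data:
        if not isinstance(x, int):
            T = type(x)          # crude stand-in for _coerce()
        total += _F(x)           # exact rational arithmetic, no rounding
        n += 1
    return (T, total, n)

# Example: the docstring's round-off case sums to exactly 1000.
# _sum_sketch([1e50, 1, -1e50] * 1000) -> (<class 'float'>, Fraction(1000, 1), 3000)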
def _fail_neg(values, errmsg='negative value'):
    """Iterate over values, failing if any are less than zero."""


def _rank(data, /, *, key=None, reverse=False, ties='average', start=1) -> list[float]:
    """Rank order a dataset.  The lowest value has rank 1.

    Ties are averaged so that equal values receive the same rank:

        >>> data = [31, 56, 31, 25, 75, 18]
        >>> _rank(data)
        [3.5, 5.0, 3.5, 2.0, 6.0, 1.0]

    The operation is idempotent:

        >>> _rank([3.5, 5.0, 3.5, 2.0, 6.0, 1.0])
        [3.5, 5.0, 3.5, 2.0, 6.0, 1.0]

    It is possible to rank the data in reverse order so that the
    highest value has rank 1.  Also, a key-function can extract
    the field to be ranked:

        >>> goals = [('eagles', 45), ('bears', 48), ('lions', 44)]
        >>> _rank(goals, key=itemgetter(1), reverse=True)
        [2.0, 1.0, 3.0]

    Ranks are conventionally numbered starting from one; however,
    setting *start* to zero allows the ranks to be used as array indices:

        >>> prize = ['Gold', 'Silver', 'Bronze', 'Certificate']
        >>> scores = [8.1, 7.3, 9.4, 8.3]
        >>> [prize[int(i)] for i in _rank(scores, start=0, reverse=True)]
        ['Bronze', 'Certificate', 'Gold', 'Silver']
    """


def _integer_sqrt_of_frac_rto(n: int, m: int) -> int:
    """Square root of n/m, rounded to the nearest integer using round-to-odd."""


_sqrt_bit_width: int = 2 * sys.float_info.mant_dig + 3


def _float_sqrt_of_frac(n: int, m: int) -> float:
    """Square root of n/m as a float, correctly rounded."""


def _decimal_sqrt_of_frac(n: int, m: int) -> Decimal:
    """Square root of n/m as a Decimal, correctly rounded."""


def mean(data):
    """Return the sample arithmetic mean of data.

    >>> mean([1, 2, 3, 4, 4])
    2.8

    >>> from fractions import Fraction as F
    >>> mean([F(3, 7), F(1, 21), F(5, 3), F(1, 3)])
    Fraction(13, 21)

    >>> from decimal import Decimal as D
    >>> mean([D("0.5"), D("0.75"), D("0.625"), D("0.375")])
    Decimal('0.5625')

    If ``data`` is empty, StatisticsError will be raised.
    """


def fmean(data, weights=None):
    """Convert data to floats and compute the arithmetic mean.

    This runs faster than the mean() function and it always returns a float.
    If the input dataset is empty, it raises a StatisticsError.

    >>> fmean([3.5, 4.0, 5.25])
    4.25
    """


def geometric_mean(data):
    """Convert data to floats and compute the geometric mean.

    Raises a StatisticsError if the input dataset is empty
    or if it contains a negative value.

    Returns zero if the product of inputs is zero.

    No special efforts are made to achieve exact results.
    (However, this may change in the future.)

    >>> round(geometric_mean([54, 24, 36]), 9)
    36.0
    """


def harmonic_mean(data, weights=None):
    """Return the harmonic mean of data.

    The harmonic mean is the reciprocal of the arithmetic mean of the
    reciprocals of the data.  It can be used for averaging ratios or
    rates, for example speeds.

    Suppose a car travels 40 km/hr for 5 km and then speeds up to
    60 km/hr for another 5 km.  What is the average speed?

    >>> harmonic_mean([40, 60])
    48.0

    Suppose a car travels 40 km/hr for 5 km, and when traffic clears,
    speeds up to 60 km/hr for the remaining 30 km of the journey.  What
    is the average speed?

    >>> harmonic_mean([40, 60], weights=[5, 30])
    56.0

    If ``data`` is empty, or any element is less than zero,
    ``harmonic_mean`` will raise ``StatisticsError``.
    """


def median(data):
    """Return the median (middle value) of numeric data.

    When the number of data points is odd, return the middle data point.
    When the number of data points is even, the median is interpolated by
    taking the average of the two middle values:

    >>> median([1, 3, 5])
    3
    >>> median([1, 3, 5, 7])
    4.0
    """


def median_low(data):
    """Return the low median of numeric data.

    When the number of data points is odd, return the middle value.  When
    it is even, the smaller of the two middle values is returned.

    >>> median_low([1, 3, 5])
    3
    >>> median_low([1, 3, 5, 7])
    3
    """


def median_high(data):
    """Return the high median of data.

    When the number of data points is odd, return the middle value.  When
    it is even, the larger of the two middle values is returned.

    >>> median_high([1, 3, 5])
    3
    >>> median_high([1, 3, 5, 7])
    5
    """


def median_grouped(data, interval=1.0):
    """Estimates the median for numeric data binned around the midpoints
    of consecutive, fixed-width intervals.

    The *data* can be any iterable of numeric data with each value being
    exactly the midpoint of its bin.  The *interval* is the width of each
    bin.

    For example, demographic information may be summarized into consecutive
    ten-year age groups with each group represented by the midpoint of its
    interval:

        >>> demographics = Counter({
        ...    25: 172,   # 20 to 30 years old
        ...    35: 484,   # 30 to 40 years old
        ...    45: 387,   # 40 to 50 years old
        ...    55:  22,   # 50 to 60 years old
        ...    65:   6,   # 60 to 70 years old
        ... })

    The 50th percentile (median) is the 536th person out of the 1071
    member cohort.  That person is in the 30 to 40 year old age group.

    The regular median() function would assume that everyone in the
    tricenarian age group was exactly 35 years old.  A more tenable
    assumption is that the 484 members of that age group are evenly
    distributed between 30 and 40.  For that, we use median_grouped().

        >>> data = list(demographics.elements())
        >>> median(data)
        35
        >>> round(median_grouped(data, interval=10), 1)
        37.5

    The caller is responsible for making sure the data points are separated
    by exact multiples of *interval*.  This is essential for getting a
    correct result.  The function does not check this precondition.

    Inputs may be any numeric type that can be coerced to a float during
    the interpolation step.
    """


def mode(data):
    """Return the most common data point from discrete or nominal data.

    ``mode`` assumes discrete data and returns a single value.  This is the
    standard treatment of the mode as commonly taught in schools:

    >>> mode([1, 1, 2, 3, 3, 3, 3, 4])
    3

    This also works with nominal (non-numeric) data:

    >>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
    'red'

    If there are multiple modes with the same frequency, return the first one
    encountered:

    >>> mode(['red', 'red', 'green', 'blue', 'blue'])
    'red'

    If *data* is empty, ``mode`` raises StatisticsError.
    """
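# The body of mode() above was not recoverable here, so the following is a
# minimal sketch of the behaviour its docstring documents: count the values
# with a Counter and return the first value with the highest count, so ties
# go to the value encountered first.  The name _mode_sketch is illustrative
# only and is not part of the module.
def _mode_sketch(data):
    from collections import Counter as _Counter
    counts = _Counter(iter(data))
    if not counts:
        raise StatisticsError('no mode for empty data')
    # most_common() sorts stably by count, so equal counts keep insertion order.
    return counts.most_common(1)[0][0]

# _mode_sketch(['red', 'red', 'green', 'blue', 'blue']) -> 'red',
# matching the tie-breaking rule shown in the doctest above.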
def multimode(data):
    """Return a list of the most frequently occurring values.

    Will return more than one result if there are multiple modes
    or an empty list if *data* is empty.

    >>> multimode('aabbbbbbbbcc')
    ['b']
    >>> multimode('aabbbbccddddeeffffgg')
    ['b', 'd', 'f']
    >>> multimode('')
    []
    """


def kde(data, h, kernel='normal', *, cumulative=False):
    """Kernel Density Estimation:  Create a continuous probability density
    function or cumulative distribution function from discrete samples.

    The basic idea is to smooth the data using a kernel function
    to help draw inferences about a population from a sample.

    The degree of smoothing is controlled by the scaling parameter h
    which is called the bandwidth.  Smaller values emphasize local
    features while larger values give smoother results.

    The kernel determines the relative weights of the sample data
    points.  Generally, the choice of kernel shape does not matter
    as much as the more influential bandwidth smoothing parameter.

    Kernels that give some weight to every sample point:

       normal (gauss)
       logistic
       sigmoid

    Kernels that only give weight to sample points within the bandwidth:

       rectangular (uniform)
       triangular
       parabolic (epanechnikov)
       quartic (biweight)
       triweight
       cosine

    If *cumulative* is true, will return a cumulative distribution function.

    A StatisticsError will be raised if the data sequence is empty.

    Example
    -------

    Given a sample of six data points, construct a continuous
    function that estimates the underlying probability density:

        >>> sample = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
        >>> f_hat = kde(sample, h=1.5)

    Compute the area under the curve:

        >>> area = sum(f_hat(x) for x in range(-20, 20))
        >>> round(area, 4)
        1.0

    Plot the estimated probability density function at
    evenly spaced points from -6 to 10:

        >>> for x in range(-6, 11):
        ...     density = f_hat(x)
        ...     plot = ' ' * int(density * 400) + 'x'
        ...     print(f'{x:2}: {density:.3f} {plot}')
        ...
        -6: 0.002 x
        -5: 0.009 x
        -4: 0.031 x
        -3: 0.070 x
        -2: 0.111 x
        -1: 0.125 x
         0: 0.110 x
         1: 0.086 x
         2: 0.068 x
         3: 0.059 x
         4: 0.066 x
         5: 0.082 x
         6: 0.082 x
         7: 0.058 x
         8: 0.028 x
         9: 0.009 x
        10: 0.002 x

    Estimate P(4.5 < X <= 7.5), the probability that a new sample value
    will be between 4.5 and 7.5:

        >>> cdf = kde(sample, h=1.5, cumulative=True)
        >>> round(cdf(7.5) - cdf(4.5), 2)
        0.22

    References
    ----------

    Kernel density estimation and its application:
    https://www.itm-conferences.org/articles/itmconf/pdf/2018/08/itmconf_sam2018_00037.pdf

    Kernel functions in common use:
    https://en.wikipedia.org/wiki/Kernel_(statistics)#kernel_functions_in_common_use

    Interactive graphical demonstration and exploration:
    https://demonstrations.wolfram.com/KernelDensityEstimation/

    Kernel estimation of cumulative distribution function of a random variable with bounded support
    https://www.econstor.eu/bitstream/10419/207829/1/10.21307_stattrans-2016-037.pdf
    """
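# The body of kde() above was not recoverable here, so the following is a
# minimal sketch of the idea its docstring documents for the default 'normal'
# kernel: the estimate at x is the average of Gaussian bumps of width h
# centred on each sample point.  The name _kde_normal_sketch is illustrative
# only; the real kde() also supports other kernels and cumulative=True.
def _kde_normal_sketch(data, h):
    from math import exp as _exp, sqrt as _sqrt, pi as _pi
    points = list(data)
    if not points:
        raise StatisticsError('Empty data sequence')
    n = len(points)

    def pdf(x):
        # Sum of standard normal kernels evaluated at (x - x_i) / h.
        return sum(_exp(-0.5 * ((x - x_i) / h) ** 2) for x_i in points) / (n * h * _sqrt(2.0 * _pi))

    return pdf

# With the docstring's sample and h=1.5, the sketch sums to approximately 1.0
# over a wide grid, like the doctest above:
# f_hat = _kde_normal_sketch([-2.1, -1.3, -0.4, 1.9, 5.1, 6.2], h=1.5)
# round(sum(f_hat(x) for x in range(-20, 20)), 4)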
def quantiles(data, *, n=4, method='exclusive'):
    """Divide *data* into *n* continuous intervals with equal probability.

    Returns a list of (n - 1) cut points separating the intervals.

    Set *n* to 4 for quartiles (the default).  Set *n* to 10 for deciles.
    Set *n* to 100 for percentiles which gives the 99 cut points that
    separate *data* into 100 equal sized groups.

    The *data* can be any iterable containing sample data.  The cut points
    are linearly interpolated between data points.

    If *method* is set to *inclusive*, *data* is treated as population
    data.  The minimum value is treated as the 0th percentile and the
    maximum value is treated as the 100th percentile.
    """


def variance(data, xbar=None):
    """Return the sample variance of data.

    data should be an iterable of Real-valued numbers, with at least two
    values.  The optional argument xbar, if given, should be the mean of
    the data.  If it is missing or None, the mean is automatically calculated.

    Use this function when your data is a sample from a population.  To
    calculate the variance from the entire population, see ``pvariance``.

    Examples:

    >>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
    >>> variance(data)
    1.3720238095238095

    If you have already calculated the mean of your data, you can pass it as
    the optional second argument ``xbar`` to avoid recalculating it:

    >>> m = mean(data)
    >>> variance(data, m)
    1.3720238095238095

    This function does not check that ``xbar`` is actually the mean of
    ``data``.  Giving arbitrary values for ``xbar`` may lead to invalid or
    impossible results.

    Decimals and Fractions are supported:

    >>> from decimal import Decimal as D
    >>> variance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
    Decimal('31.01875')

    >>> from fractions import Fraction as F
    >>> variance([F(1, 6), F(1, 2), F(5, 3)])
    Fraction(67, 108)
    """


def pvariance(data, mu=None):
    """Return the population variance of ``data``.

    data should be a sequence or iterable of Real-valued numbers, with at
    least one value.  The optional argument mu, if given, should be the mean
    of the data.  If it is missing or None, the mean is automatically
    calculated.

    Use this function to calculate the variance from the entire population.
    To estimate the variance from a sample, the ``variance`` function is
    usually a better choice.

    Examples:

    >>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25]
    >>> pvariance(data)
    1.25

    If you have already calculated the mean of the data, you can pass it as
    the optional second argument to avoid recalculating it:

    >>> mu = mean(data)
    >>> pvariance(data, mu)
    1.25

    Decimals and Fractions are supported:

    >>> from decimal import Decimal as D
    >>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
    Decimal('24.815')

    >>> from fractions import Fraction as F
    >>> pvariance([F(1, 4), F(5, 4), F(1, 2)])
    Fraction(13, 72)
    """


def stdev(data, xbar=None):
    """Return the square root of the sample variance.

    See ``variance`` for arguments and other details.

    >>> stdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
    1.0810874155219827
    """


def pstdev(data, mu=None):
    """Return the square root of the population variance.

    See ``pvariance`` for arguments and other details.

    >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
    0.986893273527251
    """


def _mean_stdev(data):
    """In one pass, compute the mean and sample standard deviation as floats."""


def _sqrtprod(x: float, y: float) -> float:
    """Return sqrt(x * y) computed with improved accuracy and without overflow/underflow."""


def covariance(x, y, /):
    """Covariance

    Return the sample covariance of two inputs *x* and *y*.  Covariance
    is a measure of the joint variability of two inputs.

    >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3]
    >>> covariance(x, y)
    0.75
    >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1]
    >>> covariance(x, z)
    -7.5
    >>> covariance(z, x)
    -7.5
    """


def correlation(x, y, /, *, method='linear'):
    """Pearson's correlation coefficient

    Return the Pearson's correlation coefficient for two inputs.  Pearson's
    correlation coefficient *r* takes values between -1 and +1.  It measures
    the strength and direction of a linear relationship.

    >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1]
    >>> correlation(x, x)
    1.0
    >>> correlation(x, y)
    -1.0

    If *method* is "ranked", computes Spearman's rank correlation coefficient
    for two inputs.  The data is replaced by ranks.  Ties are averaged
    so that equal values receive the same rank.  The resulting coefficient
    measures the strength of a monotonic relationship.

    Spearman's rank correlation coefficient is appropriate for ordinal
    data or for continuous data that doesn't meet the linear proportion
    requirement for Pearson's correlation coefficient.
    """


LinearRegression = namedtuple('LinearRegression', ('slope', 'intercept'))


def linear_regression(x, y, /, *, proportional=False):
    """Slope and intercept for simple linear regression.

    Return the slope and intercept of simple linear regression
    parameters estimated using ordinary least squares.  Simple linear
    regression describes the relationship between an independent variable
    *x* and a dependent variable *y* in terms of a linear function:

        y = slope * x + intercept + noise

    where *slope* and *intercept* are the regression parameters that are
    estimated, and noise represents the variability of the data that was
    not explained by the linear regression (it is equal to the difference
    between predicted and actual values of the dependent variable).

    The parameters are returned as a named tuple.

    >>> x = [1, 2, 3, 4, 5]
    >>> noise = NormalDist().samples(5, seed=42)
    >>> y = [3 * x[i] + 2 + noise[i] for i in range(5)]
    >>> linear_regression(x, y)  #doctest: +ELLIPSIS
    LinearRegression(slope=3.17495..., intercept=1.00925...)

    If *proportional* is true, the independent variable *x* and the
    dependent variable *y* are assumed to be directly proportional.
    The data is fit to a line passing through the origin.

    Since the *intercept* will always be 0.0, the underlying linear
    function simplifies to:

        y = slope * x + noise

    >>> y = [3 * x[i] + noise[i] for i in range(5)]
    >>> linear_regression(x, y, proportional=True)  #doctest: +ELLIPSIS
    LinearRegression(slope=2.90475..., intercept=0.0)
    """
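# The body of linear_regression() above was not recoverable here, so the
# following is a minimal sketch of the ordinary-least-squares case its
# docstring documents: centre both variables, take slope = Sxy / Sxx, and
# intercept = ybar - slope * xbar.  The name _ols_sketch is illustrative only
# and the sketch does not handle the proportional=True case.
def _ols_sketch(x, y):
    n = len(x)
    if len(y) != n:
        raise StatisticsError('linear regression requires that both inputs '
                              'have same number of data points')
    if n < 2:
        raise StatisticsError('linear regression requires at least two data points')
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    if sxx == 0.0:
        raise StatisticsError('x is constant')
    slope = sxy / sxx
    return LinearRegression(slope=slope, intercept=ybar - slope * xbar)

# _ols_sketch([1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 2, 3, 1, 2, 3, 1, 2, 3])
# -> LinearRegression(slope=0.1, intercept=1.5), matching the module docstring.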
def _normal_dist_inv_cdf(p, mu, sigma):
    ...


try:
    from _statistics import _normal_dist_inv_cdf
except ImportError:
    pass


class NormalDist:
    "Normal distribution of a random variable"

    __slots__ = {
        '_mu': 'Arithmetic mean of a normal distribution',
        '_sigma': 'Standard deviation of a normal distribution',
    }

    def __init__(self, mu=0.0, sigma=1.0):
        "NormalDist where mu is the mean and sigma is the standard deviation."

    @classmethod
    def from_samples(cls, data):
        "Make a normal distribution instance from sample data."

    def samples(self, n, *, seed=None):
        "Generate *n* samples for a given mean and standard deviation."

    def pdf(self, x):
        "Probability density function.  P(x <= X < x+dx) / dx"

    def cdf(self, x):
        "Cumulative distribution function.  P(X <= x)"

    def inv_cdf(self, p):
        """Inverse cumulative distribution function.  x : P(X <= x) = p

        Finds the value of the random variable such that the probability of
        the variable being less than or equal to that value equals the given
        probability.

        This function is also called the percent point function or quantile
        function.
        """

    def quantiles(self, n=4):
        """Divide into *n* continuous intervals with equal probability.

        Returns a list of (n - 1) cut points separating the intervals.

        Set *n* to 4 for quartiles (the default).  Set *n* to 10 for deciles.
        Set *n* to 100 for percentiles which gives the 99 cut points that
        separate the normal distribution into 100 equal sized groups.
        """

    def overlap(self, other):
        """Compute the overlapping coefficient (OVL) between two normal distributions.

        Measures the agreement between two normal probability distributions.
        Returns a value between 0.0 and 1.0 giving the overlapping area in
        the two underlying probability density functions.

            >>> N1 = NormalDist(2.4, 1.6)
            >>> N2 = NormalDist(3.2, 2.0)
            >>> N1.overlap(N2)
            0.8035050657330205
        """

    def zscore(self, x):
        """Compute the Standard Score.  (x - mean) / stdev

        Describes *x* in terms of the number of standard deviations
        above or below the mean of the normal distribution.
        """

    @property
    def mean(self):
        "Arithmetic mean of the normal distribution."

    @property
    def median(self):
        "Return the median of the normal distribution."

    @property
    def mode(self):
        """Return the mode of the normal distribution.

        The mode is the value x at which the probability density
        function (pdf) takes its maximum value.
        """

    @property
    def stdev(self):
        "Standard deviation of the normal distribution."

    @property
    def variance(self):
        "Square of the standard deviation."

    def __add__(self, other):
        """Add a constant or another NormalDist instance.

        If *other* is a constant, translate mu by the constant,
        leaving sigma unchanged.

        If *other* is a NormalDist, add both the means and the variances.
        Mathematically, this works only if the two distributions are
        independent or if they are jointly normally distributed.
        """

    def __sub__(self, other):
        """Subtract a constant or another NormalDist instance.

        If *other* is a constant, translate mu by the constant,
        leaving sigma unchanged.

        If *other* is a NormalDist, subtract the means and add the variances.
        Mathematically, this works only if the two distributions are
        independent or if they are jointly normally distributed.
        """

    def __mul__(self, other):
        """Multiply both mu and sigma by a constant.

        Used for rescaling, perhaps to change measurement units.
        Sigma is scaled with the absolute value of the constant.
        """


def _newton_raphson(f_inv_estimate, f, f_prime, tolerance=1e-12):
    def f_inv(y):
        "Return x such that f(x) ≈ y within the specified tolerance."

    return f_inv


def kde_random(data, h, kernel='normal', *, seed=None):
    """Return a function that makes a random selection from the estimated
    probability density function created by kde(data, h, kernel).

    Providing a *seed* allows reproducible selections within a single
    thread.  The seed may be an integer, float, str, or bytes.

    A StatisticsError will be raised if the *data* sequence is empty.

    Example:

    >>> data = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
    >>> rand = kde_random(data, h=1.5, seed=8675309)
    >>> new_selections = [rand() for i in range(10)]
    >>> [round(x, 1) for x in new_selections]
    [0.7, 6.2, 1.2, 6.9, 7.0, 1.8, 2.5, -0.5, -1.8, 5.6]
    """
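# The body of kde_random() above was not recoverable here, so the following is
# a minimal sketch of the resampling idea its docstring documents for the
# default 'normal' kernel: draw one of the original sample points uniformly at
# random, then jitter it by h times a draw from the kernel.  The real function
# obtains the kernel draw from an inverse CDF; random.gauss(0, 1) is used here
# as an equivalent draw for the normal kernel.  The name _kde_random_sketch is
# illustrative only, and its output will not reproduce the doctest values.
def _kde_random_sketch(data, h, seed=None):
    import random as _random
    points = list(data)
    if not points:
        raise StatisticsError('Empty data sequence')
    prng = _random.Random(seed)

    def rand():
        return prng.choice(points) + h * prng.gauss(0.0, 1.0)

    return rand

# rand = _kde_random_sketch([-2.1, -1.3, -0.4, 1.9, 5.1, 6.2], h=1.5, seed=8675309)
# values = [round(rand(), 1) for _ in range(10)]   # reproducible for a given seed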