Study of Quantile-Based Cumulative Rényi Information Measure

In this paper, we propose a quantile version of cumulative Rényi entropy for residual and past lifetimes and study its properties. We also study the quantile-based cumulative Rényi entropy of extreme order statistics when the underlying random variable is untruncated or truncated in nature. Some characterization results are obtained using relationships between the proposed information measure and reliability measures. We also examine the measure in relation to applied problems such as weighted and equilibrium models.


Introduction
Let X be a random variable with distribution function F(x). The quantile function of X is defined by

Q(u) = F^{-1}(u) = inf{x : F(x) ≥ u}, 0 ≤ u ≤ 1. (1.1)

Here and throughout the article, X is an absolutely continuous nonnegative random variable with probability density function (pdf) f(x) and survival function F̄(x). If f(·) is the pdf of X, then f(Q(u)) and q(u) = dQ(u)/du are known, respectively, as the density quantile function and the quantile density function (see Parzen (1979)). Using (1.1), we have F(Q(u)) = u, and differentiating with respect to u gives

q(u) f(Q(u)) = 1.

The mean of the distribution, assumed to be finite, is

E(X) = ∫_0^1 Q(p) dp = ∫_0^1 (1 − p) q(p) dp.

In certain cases the approach based on quantile functions is more fruitful than the use of cumulative distribution functions, since quantile functions are less influenced by extreme observations. Also, quantile functions enjoy certain properties that are not shared by the distribution function approach. Quantile functions used in applied work, such as the various forms of lambda distributions (van Staden and Loots, 2009), the power-Pareto distribution (Hankin and Lee, 2006) and Govindarajulu's distribution, do not have tractable distribution functions. For a detailed and recent study of quantile functions and their properties in modeling and analysis, we refer to Nair and Sankaran (2009) and the references therein. The hazard quantile function is defined by

K(u) = h(Q(u)) = [(1 − u) q(u)]^{-1},

where h(x) = f(x)/(1 − F(x)) is the hazard rate of X. Another useful measure, closely related to the hazard quantile function, is the mean residual quantile function, given by

M(u) = m(Q(u)) = (1 − u)^{-1} ∫_u^1 (1 − p) q(p) dp,

where m(t) = E(X − t | X > t) is the mean residual life function (MRLF) of X. It is well known that both the hazard quantile function and the mean residual quantile function uniquely determine the quantile density function q(u).
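As a quick numerical sanity check on these identities, the following sketch uses the exponential distribution with rate lam (an illustrative choice, not part of the paper) to verify that F(Q(u)) = u, that q(u) f(Q(u)) = 1, and that the hazard quantile function K(u) = [(1 − u)q(u)]^{-1} is constant for the exponential:

```python
import math

# Exponential(rate lam), chosen purely for illustration:
# F(x) = 1 - exp(-lam*x), so Q(u) = -log(1-u)/lam
lam = 2.0
F = lambda x: 1.0 - math.exp(-lam * x)
Q = lambda u: -math.log(1.0 - u) / lam        # quantile function
f = lambda x: lam * math.exp(-lam * x)        # density
q = lambda u: 1.0 / (lam * (1.0 - u))         # quantile density dQ/du

for u in (0.1, 0.5, 0.9):
    assert abs(F(Q(u)) - u) < 1e-12           # F(Q(u)) = u
    assert abs(q(u) * f(Q(u)) - 1.0) < 1e-12  # q(u) f(Q(u)) = 1

# hazard quantile function K(u) = [(1-u) q(u)]^{-1}; equals lam for the exponential
K = lambda u: 1.0 / ((1.0 - u) * q(u))
```

The constancy of K(u) reflects the memoryless property of the exponential distribution.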
Further, the relationship between the quantile density function and the mean residual quantile function is given by

(1 − u) q(u) = M(u) − (1 − u) M'(u).

The idea of information-theoretic entropy was introduced by Shannon (1948) and plays an important role in diverse areas such as financial analysis, data compression, molecular biology, computer science and information theory. The average amount of uncertainty associated with a nonnegative continuous random variable X can be measured using the differential entropy

H(X) = −∫_0^∞ f(x) log f(x) dx,

a continuous counterpart of the Shannon (1948) entropy in the discrete case. Rao et al. (2004) pointed out some basic shortcomings of the Shannon differential entropy measure and introduced an alternative measure of uncertainty, called the cumulative residual entropy (CRE) of a random variable X with survival function F̄, given by

ξ(X) = −∫_0^∞ F̄(x) log F̄(x) dx.

Asadi and Zohrevand (2007) considered the dynamic cumulative residual entropy (DCRE), the cumulative residual entropy of the residual lifetime X_t = [X − t | X > t], given by

ξ(X; t) = −∫_t^∞ (F̄(x)/F̄(t)) log(F̄(x)/F̄(t)) dx.

Di Crescenzo and Longobardi (2009) introduced a dual measure based on the cumulative distribution function F(x), called the cumulative past entropy (CPE), together with its dynamic version:

ξ̄(X) = −∫_0^∞ F(x) log F(x) dx, ξ̄(X; t) = −∫_0^t (F(x)/F(t)) log(F(x)/F(t)) dx.

All the theoretical investigations and applications of these information measures are based on the distribution function. A probability distribution can be specified either by the distribution function or by the quantile function; although both convey the same information about the distribution, with different interpretations, the concepts and methodologies based on distribution functions are traditionally employed in most of statistical theory and practice. The study of entropy functions using quantile functions is of recent interest.
A quantile version of the dynamic cumulative residual entropy (DCRE), ξ(u), has been introduced in the literature. When u → 0, (11) reduces to

ξ = −∫_0^1 log(1 − p) (1 − p) q(p) dp,

a quantile version of the CRE. New models and characterizations that are unresolvable in the distribution function approach can be resolved with the help of quantile functions. For the use of quantile functions in formulating properties of entropy and other information measures for nonnegative absolutely continuous random variables, we refer to Sunoj and Sankaran (2012), Sunoj et al. (2013) and Noughabi et al. (2020). There have been several attempts at parametric generalizations of the CRE. Zografos and Nadarajah (2005) introduced the cumulative residual Rényi entropy (CRRE) of order α as

ξ_α(X) = (1/(1 − α)) log ∫_0^∞ F̄^α(x) dx, α > 0, α ≠ 1. (12)

Further, Sunoj and Linu (2012) proposed a dynamic version,

ξ_α(X; t) = (1/(1 − α)) log ( (1/F̄^α(t)) ∫_t^∞ F̄^α(x) dx ), (13)

defined as the dynamic cumulative residual Rényi entropy (DCRRE) of the random variable X. For more properties and applications of this measure, we refer to Kayal (2015) and Minimol (2017).
This measure is much more flexible: the parameter α enables several measurements of uncertainty within a given distribution and increases the scope of application. It also forms a parametric family of entropy measures that weights extremely rare and regular events quite differently. Properties and applications of these information-theoretic measures in reliability engineering, computer vision, coding theory and finance have been studied by several researchers; we refer to Rao (2005), Wang, Kumar and Singh (2018) and Baratpour and Khammar (2018). Motivated by these, in the present study we consider the cumulative residual Rényi entropy measures of order α (residual and past), based on the survival and distribution functions, in terms of quantile functions. In the present manuscript we introduce the quantile version of the cumulative residual Rényi entropy of order α for residual and reversed residual (past) lifetimes and prove some characterization results for extreme order statistics. The text is organized as follows. In Section 2, we introduce the quantile-based cumulative residual Rényi entropy and its dynamic version. Section 3 proves some characterization results based on the measure considered in Section 2. In Section 4, we extend the quantile-based cumulative residual Rényi entropy in the context of order statistics and study its properties. In Section 5, we derive the weighted form of this measure, call it the weighted cumulative residual Rényi quantile entropy, and study some characterization results.

Cumulative residual Rényi quantile entropy
The quantile version of the cumulative residual Rényi entropy of the nonnegative random variable X corresponding to (12) becomes

ξ_α = (1/(1 − α)) log ∫_0^1 (1 − p)^α q(p) dp, (14)

which we call the cumulative residual Rényi quantile entropy (CRRQE). When α → 1, ξ_α reduces to −∫_0^1 log(1 − p) (1 − p) q(p) dp, a quantile version of the CRE suggested in the literature. Since q(p) = [(1 − p) K(p)]^{-1}, equation (14) can be written as

ξ_α = (1/(1 − α)) log ∫_0^1 (1 − p)^{α−1} (K(p))^{-1} dp, (15)

an expression for ξ_α in terms of the hazard quantile function K(u). There are some models that do not have closed-form expressions for the cdf or pdf but have a simple quantile function or quantile density function (see Nair et al. (2011)). Accordingly, in the following example we obtain ξ_α for a family for which q(·) exists.
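As an illustration of (14), the following sketch (exponential case, with an assumed rate lam chosen for illustration) approximates ∫_0^1 (1 − p)^α q(p) dp by a midpoint rule and compares the result with the closed form (1/(1 − α)) log(1/(λα)), which one can obtain by direct integration for the exponential distribution:

```python
import math

# Exponential quantile density (illustrative): q(p) = 1/(lam*(1-p))
lam, alpha = 2.0, 2.0
q = lambda p: 1.0 / (lam * (1.0 - p))

# midpoint-rule approximation of I = ∫_0^1 (1-p)^alpha q(p) dp
n = 100000
I = sum((1.0 - (k + 0.5) / n) ** alpha * q((k + 0.5) / n) for k in range(n)) / n
xi_numeric = math.log(I) / (1.0 - alpha)

# for the exponential the integral reduces to 1/(lam*alpha) analytically
xi_closed = math.log(1.0 / (lam * alpha)) / (1.0 - alpha)
```

With lam = alpha = 2 both values equal log 4, agreeing to numerical precision.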

Example 1
Suppose X is distributed with quantile density function

q(u) = C (1 − u)^{−A} (−log(1 − u))^{−M}, 0 < u < 1,

where C > 0 and M and A are real constants. This family contains several distributions, including the Weibull when A = 1, M = (b − 1)/b, the uniform when A = 0, M = 0, the Pareto when A > 1, M = 0, and the rescaled beta when A < 1, M = 0. Then the CRRQE is obtained as

ξ_α = (1/(1 − α)) log ( C Γ(1 − M) / (α − A + 1)^{1−M} ), α − A + 1 > 0, M < 1,

where Γ(·) represents the gamma function.

Example 2
A lambda family of distributions that is of interest in reliability is the Davis distribution proposed by Hankin and Lee (2006), with quantile function

Q(u) = C u^{λ_1} (1 − u)^{−λ_2}, 0 < u < 1, C, λ_1, λ_2 ≥ 0.

This is a flexible family for right-skewed nonnegative data that provides good approximations to the exponential, gamma, lognormal and Weibull distributions. The CRRQE (14) for the Davis distribution is given by (16). As λ_2 → 0, (16) reduces to

ξ_α = (1/(1 − α)) log ( C λ_1 β(λ_1, 1 + α) ),

corresponding to the power distribution. Also, as λ_1 → 0, (16) reduces to

ξ_α = (1/(1 − α)) log ( C λ_2 / (α − λ_2) ),

corresponding to the Pareto I distribution.

Example 3

A nonnegative random variable X is Weibull distributed with quantile function Q(u) = σ(−log(1 − u))^{1/b} and quantile density function q(u) = (σ/b)(−log(1 − u))^{(1/b)−1}(1 − u)^{−1}, 0 < u < 1. The mean of the distribution, E(X) = ∫_0^1 Q(p) dp = ∫_0^1 (1 − p) q(p) dp = σ Γ(1 + 1/b), is assumed to be finite. The CRRQE (14) then gives

∫_0^1 (1 − p)^α q(p) dp = σ Γ(1 + 1/b) α^{−1/b} = E(X) α^{−1/b}.

This result shows that for the Weibull family the ratio of this integral to the mean is the constant α^{−1/b}, depending only on b and α. If b = 1, then X has the exponential distribution and this ratio equals 1/α.

Example 4

Suppose X is distributed with quantile density function q(u) = K u^δ (1 − u)^{−(A+δ)}, 0 < u < 1, where K, δ and A are real constants. This family contains several well-known distributions, including the exponential (δ = 0, A = 1), Pareto (δ = 0, A > 1), rescaled beta (δ = 0, A < 1), the log-logistic distribution (δ = λ − 1, A = 2) and Govindarajulu's distribution (δ = b − 1, A = −b, K = ab(b + 1)). Then the cumulative residual Rényi quantile entropy (14) is obtained as

ξ_α = (1/(1 − α)) log ( K β(δ + 1, α − A − δ + 1) ), α − A − δ + 1 > 0.

In the context of reliability and survival analysis, the current age of a component often needs to be taken into account. In such cases, measuring uncertainty using ξ_α is not appropriate, and a modified version of ξ_α is needed for the residual random variable X_t = (X − t | X > t). Here a dynamic measure is useful to describe the information carried by the random lifetime as age changes.
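The constancy of the Weibull ratio in Example 3 can be checked numerically; the sketch below (with illustrative values of σ, b and α, assuming the parametrization Q(u) = σ(−log(1 − u))^{1/b}) compares ∫_0^1 (1 − p)^α q(p) dp / E(X) against α^{−1/b}:

```python
import math

# Weibull (assumed parametrization): Q(u) = sigma*(-log(1-u))^{1/b}
sigma, b, alpha = 1.0, 0.5, 2.0
q = lambda u: (sigma / b) * (-math.log(1.0 - u)) ** (1.0 / b - 1.0) / (1.0 - u)

# midpoint rule for ∫_0^1 (1-p)^alpha q(p) dp
n = 100000
I = sum((1.0 - (k + 0.5) / n) ** alpha * q((k + 0.5) / n) for k in range(n)) / n
mean = sigma * math.gamma(1.0 + 1.0 / b)   # E(X) = sigma*Gamma(1 + 1/b)
ratio = I / mean                           # should equal alpha**(-1/b)
```

With b = 1/2 and α = 2, the ratio is α^{−2} = 0.25, independent of the scale σ.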
An equivalent definition of the dynamic cumulative residual Rényi entropy (13) in terms of the quantile function is

ξ_α(u) = (1/(1 − α)) log ( (1 − u)^{−α} ∫_u^1 (1 − p)^α q(p) dp ). (17)

The measure (17) may be considered a dynamic cumulative residual Rényi quantile entropy (DCRRQE) measure. Rewriting equation (17) using (6), applying integration by parts to the last term and simplifying, we obtain (18). Differentiating (18) with respect to u and using (4), (18) reduces to

q(u) = e^{(1−α)ξ_α(u)} ( α/(1 − u) − (1 − α)ξ'_α(u) ), (19)

where the prime denotes the derivative with respect to u. Equation (19) provides a direct relationship between the quantile density function q(u) and ξ_α(u); therefore ξ_α(u) uniquely determines the underlying distribution. Table 2.1 provides the quantile functions of some important models and the corresponding entropies, where γ̄_x(a, b) and β̄_x(a, b) denote the incomplete gamma function and the incomplete beta function, defined by

γ̄_x(a, b) = ∫_x^∞ y^{a−1} e^{−by} dy, a, b > 0, x > 0, and β̄_x(a, b) = ∫_x^1 y^{a−1}(1 − y)^{b−1} dy, a, b > 0, 0 < x < 1,

respectively. The next theorem gives necessary and sufficient conditions for ξ_α(u) to be an increasing (decreasing) function of u.
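To illustrate (17), the following sketch evaluates the DCRRQE of the exponential distribution (rate lam, chosen for illustration) at several values of u; by direct integration the measure is constant in u and equals (1/(1 − α)) log(1/(λα)):

```python
import math

lam, alpha = 0.5, 3.0
q = lambda p: 1.0 / (lam * (1.0 - p))   # exponential quantile density (illustrative)

def dcrrqe(u, m=100000):
    # (17): xi_alpha(u) = 1/(1-alpha) * log[(1-u)^{-alpha} * ∫_u^1 (1-p)^alpha q(p) dp]
    h = (1.0 - u) / m
    I = sum((1.0 - (u + (k + 0.5) * h)) ** alpha * q(u + (k + 0.5) * h)
            for k in range(m)) * h
    return math.log(I / (1.0 - u) ** alpha) / (1.0 - alpha)

const = math.log(1.0 / (lam * alpha)) / (1.0 - alpha)  # closed form for the exponential
```

The u-independence here is a particular feature of the exponential distribution and is consistent with the characterization results of Section 3.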

Theorem 1
Let X be a nonnegative absolutely continuous random variable with survival function F̄(x). Then ξ_α(u) is increasing (decreasing) in u if and only if, for all u ∈ (0, 1),

ξ_α(u) ≥ (≤) −(log α + log K(u))/(1 − α).

Proof

Differentiating (17) with respect to u and using (4), we obtain

(1 − α)ξ'_α(u) = α/(1 − u) − e^{−(1−α)ξ_α(u)} / ((1 − u)K(u)).

Since ξ_α(u) is increasing (decreasing), ξ'_α(u) ≥ (≤) 0 for all u ∈ (0, 1), and the above expression yields ξ_α(u) ≥ (≤) −(log K(u) + log α)/(1 − α).

In many realistic situations the random variable refers not only to the future but also to the past. Suppose that at time t one undergoes a medical test to check for a certain disease, and the test is positive. If X denotes the age at which the patient was infected, then it is known that X < t, and the question is how much time has elapsed since the patient was infected. In this situation the random variable t_X = [t − X | X ≤ t], known as the inactivity time, is suitable for describing the time elapsed between the failure of a system and the time at which it is found to be down. The past lifetime t_X is related to two relevant ageing functions: the reversed hazard rate and the mean inactivity time. Their quantile versions are given by

K̃(u) = [u q(u)]^{−1} and M̃(u) = u^{−1} ∫_0^u p q(p) dp,

respectively. The relationship (6) for the inactivity time becomes

u q(u) = M̃(u) + u M̃'(u);

we refer to Nair and Sankaran (2009). Analogous to the cumulative residual Rényi entropy (CRRE) of order α in (12), Abbasnejad (2011) proposed a cumulative Rényi entropy measure of the failure (past) lifetime together with its dynamic version. As α → 1, (25) and (24) reduce to the cumulative entropy (9) and the past cumulative entropy (10), respectively. The quantile version of the cumulative past entropy has also been considered in the literature. In analogy with (12), we propose a cumulative past Rényi quantile entropy (CPRQE) that computes the uncertainty related to the past. It is defined as

ξ̄_α(u) = (1/(1 − α)) log ( u^{−α} ∫_0^u p^α q(p) dp ). (27)

When u → 1, (27) reduces to (1/(1 − α)) log ∫_0^1 p^α q(p) dp, a quantile version of the cumulative past Rényi entropy.
Using (23), equation (27) can be rewritten; applying integration by parts to the last term and simplifying, we obtain (28). Differentiating (28) with respect to u and using (21), (28) reduces to

q(u) = e^{(1−α)ξ̄_α(u)} ( (1 − α)ξ̄'_α(u) + α/u ), (29)

where the prime denotes the derivative with respect to u. Equation (29) provides a direct relationship between the quantile density function q(u) and ξ̄_α(u); therefore ξ̄_α(u) uniquely determines the underlying distribution.

Example 5
Let X be a random variable having the Tukey lambda distribution, with quantile function

Q(u) = (u^λ − (1 − u)^λ)/λ, 0 ≤ u ≤ 1,

defined for all nonzero values of λ, so that q(u) = u^{λ−1} + (1 − u)^{λ−1}. The cumulative past Rényi quantile entropy for the Tukey lambda distribution is then given by

ξ̄_α(u) = (1/(1 − α)) log ( u^{−α} ( u^{α+λ}/(α + λ) + β_u(α + 1, λ) ) ),

where β_u(a, b) = ∫_0^u y^{a−1}(1 − y)^{b−1} dy is the incomplete beta function.
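The quantile density q(u) = u^{λ−1} + (1 − u)^{λ−1} used in Example 5 can be confirmed by differentiating Q numerically; a minimal sketch with an illustrative λ:

```python
lam = 0.7
# Tukey lambda quantile function and its exact derivative (quantile density)
Q = lambda u: (u ** lam - (1.0 - u) ** lam) / lam
q_exact = lambda u: u ** (lam - 1.0) + (1.0 - u) ** (lam - 1.0)

# central finite difference approximation of dQ/du
h = 1e-6
q_numeric = lambda u: (Q(u + h) - Q(u - h)) / (2.0 * h)
```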

Characterization Results
Using the relationship between the dynamic cumulative residual Rényi quantile entropy ξ_α(u) and the hazard quantile function K(u), we characterize some lifetime distributions based on the quantile entropy measure (17). We give the following theorem.

Theorem 2
Let X be a random variable with hazard quantile function K(u) for all u ∈ (0, 1). The relationship

ξ_α(u) = (1/(1 − α)) log ( c / K(u) ), (31)

where c is a constant, holds if and only if X follows the generalized Pareto distribution with quantile function

Q(u) = (b/a) ( (1 − u)^{−a/(a+1)} − 1 ), b > 0, a > −1.

Proof
The hazard quantile function of the generalized Pareto distribution is K(u) = ((a + 1)/b)(1 − u)^{a/(a+1)}; substituting it into (17) gives the "if" part of the theorem. To prove the "only if" part, suppose (31) holds. Then

(1 − u)^{−α} ∫_u^1 (1 − p)^α q(p) dp = c / K(u).

Using (4), this becomes

∫_u^1 (1 − p)^α q(p) dp = c (1 − u)^{α+1} q(u).

Differentiating both sides with respect to u and after some algebraic simplification, we have

q'(u)/q(u) = ((α + 1)c − 1) / (c(1 − u)).

This gives
q(u) = A (1 − u)^{−((α+1)c−1)/c},

where A is a constant. Substituting the value of c, this gives the quantile density of the generalized Pareto distribution, which characterizes it. Hence the theorem is proved.
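Theorem 2 can be checked numerically for the generalized Pareto parametrization used above; the sketch below (with illustrative a, b and α) compares the DCRRQE computed by quadrature with (1/(1 − α)) log(c/K(u)), where the value c = (a + 1)/(α(a + 1) − a) is the one implied by the derivation (an assumption of this sketch, not stated explicitly in the source):

```python
import math

# Generalized Pareto (assumed parametrization): Q(u) = (b/a)*((1-u)^{-a/(a+1)} - 1)
a, b_par, alpha = 1.0, 2.0, 2.0
r = a / (a + 1.0)
q = lambda u: (b_par / (a + 1.0)) * (1.0 - u) ** (-r - 1.0)   # quantile density
K = lambda u: 1.0 / ((1.0 - u) * q(u))                        # hazard quantile

def dcrrqe(u, m=100000):
    h = (1.0 - u) / m
    I = sum((1.0 - (u + (k + 0.5) * h)) ** alpha * q(u + (k + 0.5) * h)
            for k in range(m)) * h
    return math.log(I / (1.0 - u) ** alpha) / (1.0 - alpha)

c = (a + 1.0) / (alpha * (a + 1.0) - a)   # constant implied by the proof (assumption)
u0 = 0.4
lhs = dcrrqe(u0)
rhs = math.log(c / K(u0)) / (1.0 - alpha)
```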
Next we extend the result to more general case where c is a function of u.

Theorem 3
Let X be a nonnegative absolutely continuous random variable with hazard quantile function K(u) and dynamic cumulative residual Rényi quantile entropy ξ_α(u) satisfying

ξ_α(u) = (1/(1 − α)) log ( c(u) / K(u) ), (32)

where c(u) is a differentiable function of u.

Proof

Let (32) be valid. Then

Substituting the value of K(u) from (4), we have

∫_u^1 (1 − p)^α q(p) dp = c(u) (1 − u)^{α+1} q(u).

Differentiating both sides with respect to u and after some algebraic simplification, we have

q'(u)/q(u) = ((α + 1)c(u) − 1 − (1 − u)c'(u)) / (c(u)(1 − u)),

where the prime denotes the derivative with respect to u. Integrating both sides from 0 to u and simplifying, we obtain q(u) in terms of c(u). In particular, if c(u) = au + b with a, b ≥ 0, then the above yields (34). Further, for a = 0, expression (34) gives the characterization result of Theorem 2.
The following theorem gives another characterization of the generalized Pareto distribution, using the relationship between ξ_α(u) and the mean residual quantile function M(u); the proof follows along the same lines as that of Theorem 3 and is hence omitted.

Theorem 4
Let X be a random variable with mean residual quantile function M(u) for all u ∈ (0, 1). The relationship

ξ_α(u) = (1/(1 − α)) log ( c M(u) ),

where c is a constant, holds if and only if X follows the generalized Pareto distribution with quantile function Q(u) = (b/a)((1 − u)^{−a/(a+1)} − 1).

Theorem 5
For a nonnegative random variable X, the relationship, where C is a constant, holds for all u ∈ (0, 1) if and only if X follows (i) the uniform distribution for C = 1, (ii) the exponential distribution for C = 0, (iii) the Pareto I distribution for C = −1/a.

After some algebraic simplification, differentiating both sides with respect to u and simplifying, we get

log q(u) = (C − 1) log(1 − u) + K,

where K is the constant of integration. Now, if C = 1 and K = log(b − a), b > a, then Q(u) = a + (b − a)u: the uniform distribution. If C = 0 and K = −log λ, λ > 0, then Q(u) = −λ^{−1} log(1 − u): the exponential distribution with parameter λ. If C = −1/a and K = log(b/a), where a and b are positive constants, then Q(u) = b(1 − u)^{−1/a}: the Pareto I distribution.
In the following theorem, we characterize the power distribution when the CPRQE ξ̄_α(u) is expressed in terms of the reversed hazard quantile function K̃(u) in (21).

Theorem 6
Let X be a nonnegative continuous random variable with reversed hazard quantile function K̃(u) for all u ∈ (0, 1). The cumulative past Rényi quantile entropy ξ̄_α(u) satisfies

ξ̄_α(u) = (1/(1 − α)) log ( c / K̃(u) ), (37)

where c is a constant, if and only if X has the power distribution.

Proof
The reversed hazard quantile function of the power distribution with quantile function Q(u) = a u^{1/b} is K̃(u) = (b/a) u^{−1/b}; substitution gives the "if" part of the theorem. To prove the "only if" part, suppose (37) holds. Using (27), it gives

u^{−α} ∫_0^u p^α q(p) dp = c / K̃(u).
Differentiating both sides with respect to u and simplifying, this reduces to q(u) = A u^{(1/b)−1}, where A is a constant, which characterizes the power distribution for c = b/(bα + 1). Next we characterize lifetime models when the CPRQE (27) is expressed in terms of the quantile version of the mean inactivity time M̃(u). The proof follows along the same lines as that of Theorem 6 and is hence omitted.
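The constant c = b/(bα + 1) in Theorem 6 can be verified numerically for the power distribution Q(u) = a u^{1/b}; a sketch with illustrative parameter values:

```python
import math

# Power distribution (as in the paper's examples): Q(u) = a*u^{1/b}
a, b_par, alpha = 2.0, 3.0, 2.0
q = lambda u: (a / b_par) * u ** (1.0 / b_par - 1.0)
K_rev = lambda u: 1.0 / (u * q(u))          # reversed hazard quantile K~(u)

def lhs(u, m=100000):
    # u^{-alpha} * ∫_0^u p^alpha q(p) dp by the midpoint rule
    h = u / m
    return sum(((k + 0.5) * h) ** alpha * q((k + 0.5) * h)
               for k in range(m)) * h / u ** alpha

c = b_par / (b_par * alpha + 1.0)           # = b/(b*alpha + 1), as in Theorem 6
```

At any fixed u the left-hand side matches c/K̃(u), confirming the stated constant.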

Theorem 7
Let X be a nonnegative continuous random variable with mean inactivity time quantile function M̃(u) for all u ∈ (0, 1). The cumulative past Rényi quantile entropy ξ̄_α(u) satisfies

ξ̄_α(u) = (1/(1 − α)) log ( c M̃(u) ),

where c is a constant, if and only if X has the power distribution.

DCRRQE of Order Statistics X_{i:n}
Suppose X_1, X_2, ..., X_n is a random sample from a population with probability density function f and cumulative distribution function F(·), and let X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n} be the order statistics obtained by arranging the sample in increasing order of magnitude. The pdf of the i-th order statistic X_{i:n} is

f_{i:n}(x) = (1/β(i, n − i + 1)) [F(x)]^{i−1} [1 − F(x)]^{n−i} f(x),

where β(a, b) = ∫_0^1 y^{a−1}(1 − y)^{b−1} dy is the beta function. The corresponding quantile-based density function of X_{i:n} becomes

f_{i:n}(Q(u)) = u^{i−1}(1 − u)^{n−i} / (β(i, n − i + 1) q(u)).

Kumar and Nirdesh (2019) proposed the quantile-based Rényi entropy of X_{i:n} (38) and studied some of its properties. Unlike (38), the quantile form will be more useful in cases where we do not have a tractable distribution function but do have a closed-form quantile function. In analogy with (12), the cumulative residual Rényi entropy of the i-th order statistic X_{i:n} is defined as

ξ_α^{i:n} = (1/(1 − α)) log ∫_0^∞ F̄_{i:n}^α(x) dx, (39)

where F̄_{i:n} is the survival function of the i-th order statistic. The cumulative residual Rényi quantile entropy of order statistics corresponding to (39) becomes

ξ_α^{X_{i:n}} = (1/(1 − α)) log ∫_0^1 ( β̄_p(i, n − i + 1)/β(i, n − i + 1) )^α q(p) dp, (40)

where β̄_u(i, n − i + 1)/β(i, n − i + 1) is the quantile form of the survival function F̄_{i:n}(x). In system reliability, the first order statistic represents the lifetime of a series system, while the n-th order statistic measures the lifetime of a parallel system. For a series system (i = 1), we have

ξ_α^{X_{1:n}} = (1/(1 − α)) log ∫_0^1 (1 − p)^{nα} q(p) dp,

and for a parallel system (i = n), we have

ξ_α^{X_{n:n}} = (1/(1 − α)) log ∫_0^1 (1 − p^n)^α q(p) dp.

The residual lifetime of a system still operating at time t is X_t = (X − t | X > t), which has probability density function f(x; t) = f(x)/F̄(t), x ≥ t > 0. Thapliyal and Taneja (2015) studied the dynamic cumulative residual Rényi entropy (DCRRE) measure for X_{i:n}, given by

ξ_α^{X_{i:n}}(t) = (1/(1 − α)) log ( (1/F̄_{i:n}^α(t)) ∫_t^∞ F̄_{i:n}^α(x) dx ).
For the i-th order statistic X_{i:n}, the quantile version of the DCRRE is

ξ_α^{X_{i:n}}(u) = ξ_α^{X_{i:n}}(Q(u)) = (1/(1 − α)) log ( (β̄_u(i, n − i + 1))^{−α} ∫_u^1 (β̄_p(i, n − i + 1))^α q(p) dp ), (43)

where β̄_u(i, n − i + 1)/β(i, n − i + 1) is the quantile form of the survival function F̄_{i:n}(x) and β̄_x(a, b) = ∫_x^1 y^{a−1}(1 − y)^{b−1} dy, 0 < x < 1, is the incomplete beta function; see David and Nagaraja (2003). An equivalent representation of (43) is of the form (44). Differentiating (43) with respect to u and after some algebraic simplification, we obtain a direct relationship between the quantile density function q(u) and ξ_α^{X_{i:n}}(u), which shows that ξ_α^{X_{i:n}}(u) uniquely determines the underlying distribution. In system reliability, the minimum and maximum are examples of extreme order statistics, defined by X_{1:n} = min{X_1, X_2, ..., X_n} and X_{n:n} = max{X_1, X_2, ..., X_n}. The extremes X_{1:n} and X_{n:n} are of special interest in many practical problems of distributional analysis; they arise in the statistical study of floods and droughts, as well as in problems of breaking strength and fatigue failure. Substituting i = 1 in (43), the DCRRQE of the first order statistic X_{1:n} is given as

ξ_α^{X_{1:n}}(u) = (1/(1 − α)) log ( (β̄_u(1, n))^{−α} ∫_u^1 (β̄_p(1, n))^α q(p) dp ).
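For a series system of exponential components (an illustrative special case), the survival quantile of the minimum is (1 − u)^n, and direct integration makes the DCRRQE of X_{1:n} constant in u, equal to (1/(1 − α)) log(1/(nλα)); the sketch below checks this by quadrature:

```python
import math

# Series system of nsys iid exponential(lam) components (illustrative choice)
lam, alpha, nsys = 1.5, 2.0, 4
q = lambda p: 1.0 / (lam * (1.0 - p))

def dcrrqe_min(u, m=100000):
    # (1-u)^{-n*alpha} * ∫_u^1 (1-p)^{n*alpha} q(p) dp, then 1/(1-alpha)*log(.)
    h = (1.0 - u) / m
    I = sum((1.0 - (u + (k + 0.5) * h)) ** (nsys * alpha) * q(u + (k + 0.5) * h)
            for k in range(m)) * h
    return math.log(I / (1.0 - u) ** (nsys * alpha)) / (1.0 - alpha)

closed = math.log(1.0 / (lam * nsys * alpha)) / (1.0 - alpha)
```

This is consistent with Theorem 9 below, since the minimum of exponentials is again exponential with hazard quantile nλ.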

Proof

(i) The probability density function of X_{1:n} is f_{1:n}(x) = n(1 − F(x))^{n−1} f(x). From the given condition, (48) can be rewritten accordingly. Since 0 < α < 1 and φ is nonnegative and increasing convex (concave), [φ(Q(p))]^{1−α} is increasing (decreasing) and nonnegative. Hence, by Lemma 4.1, (48) is increasing (decreasing), which proves (i). (ii) When α > 1, [φ(Q(p))]^{α−1} is decreasing in p, since φ is increasing and convex. Hence the corresponding measure is decreasing (increasing) in u. This completes the proof.

Example 7
A lambda family of distributions that is of interest in reliability is the Davis distribution proposed by Hankin and Lee (2006), with quantile function Q(u) = C u^{λ_1} (1 − u)^{−λ_2}, 0 < u < 1, C, λ_1, λ_2 ≥ 0. This is a flexible family for right-skewed nonnegative data that provides good approximations to the exponential, gamma, lognormal and Weibull distributions. A special feature of such families is that they are expressed in terms of quantile functions, for which distribution functions are not available in closed form to facilitate conventional analysis. The DCRRQE of the sample minimum for the Davis distribution is given by (50). As λ_1 → 0, (50) reduces to the corresponding expression for the Pareto I distribution.

Next, we obtain characterization results based on the first (minimum) and last (maximum) order statistics in a random sample X_1, X_2, ..., X_n of size n from a positive and continuous random variable X.

Theorem 9
Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x) and hazard quantile function K_{X_{1:n}}(u). The relationship

ξ_α^{X_{1:n}}(u) = (1/(1 − α)) log ( c / K_{X_{1:n}}(u) ), (51)

where c is a constant, holds for all u ∈ (0, 1) if and only if X is distributed as the generalized Pareto distribution with quantile function Q(u) = (b/a)((1 − u)^{−a/(a+1)} − 1).

Proof

Suppose (51) is valid. Substituting

K_{X_{1:n}}(u) = f_{1:n}(Q(u)) / (1 − F(Q(u)))^n = n / ((1 − u)q(u))

and simplifying, it gives

∫_u^1 (1 − p)^{nα} q(p) dp = (c/n) (1 − u)^{nα+1} q(u).

Differentiating both sides with respect to u and after some algebraic simplification, we have

This gives
q(u) = A (1 − u)^{(n/c) − (nα+1)},

where A is a constant, which characterizes the generalized Pareto distribution. The "if" part of the theorem is straightforward to prove.

Corollary 1
Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x) and hazard quantile function K_{X_{1:n}}(u) for all u ∈ (0, 1). Then the relationship (51) holds if and only if (i) for c = 1/α, X follows the exponential distribution; (ii) for c < 1/α, X follows the Pareto I distribution; (iii) for c > 1/α, X follows the finite range distribution.

Theorem 10

Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x). Then the relationship (52), where C is a constant, holds for all u ∈ (0, 1) if and only if X is distributed as (i) the uniform distribution for C = 1, (ii) the exponential distribution for C = 0, (iii) the Pareto I distribution for C = −1/a.

Proof
The necessity part follows by direct calculation. For the sufficiency part, let us assume that relationship (52) holds. From (45) and (47), after simplification and differentiating both sides with respect to u, we get

log q(u) = (C − 1) log(1 − u) + K,

where K is the constant of integration. Now, if C = 1 and K = log(b − a), b > a, then Q(u) = a + (b − a)u: the uniform distribution. If C = 0 and K = −log λ, λ > 0, then Q(u) = −λ^{−1} log(1 − u): the exponential distribution with parameter λ. If C = −1/a and K = log(b/a), where a and b are positive constants, then Q(u) = b(1 − u)^{−1/a}: the Pareto I distribution.

Theorem 11
Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x) and hazard quantile function K_{X_{1:n}}(u). Then the relationship (53), where C is a constant, holds for all u ∈ (0, 1) if and only if X is distributed as (i) the uniform distribution for C = 0, (ii) the exponential distribution for C = 1, (iii) the Pareto I distribution for C = 1 + 1/a.

Proof
The necessity part follows by direct calculation. For the sufficiency part, let us assume that relationship (53) holds. Substituting K_{X_{1:n}}(u) = n/((1 − u)q(u)) and using equation (45), then differentiating both sides with respect to u and after some algebraic simplification, we obtain a differential equation in q(u); integrating gives q(u) up to a constant of integration A. Now, for C = 0, C = 1 and C = 1 + 1/a, with the appropriate values of A, we obtain the desired result.

For the sample minimum X_{1:n}, the relationship (6) becomes

(1 − u)q(u) = n M_{X_{1:n}}(u) − (1 − u) M'_{X_{1:n}}(u).

We state a characterization result using the relationship between ξ_α^{X_{1:n}}(u) and M_{X_{1:n}}(u); the proof follows along the same lines as the earlier characterization theorems and is hence omitted.
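The identity (1 − u)q(u) = n M_{X_{1:n}}(u) − (1 − u)M'_{X_{1:n}}(u) can be checked numerically. The sketch below uses exponential components (an illustrative choice) and computes M_{X_{1:n}}(u) = (1 − u)^{−n} ∫_u^1 (1 − p)^n q(p) dp by quadrature, with its derivative approximated by central differences:

```python
import math

# Mean residual quantile function of the minimum of nsys exponential(lam) components
lam, nsys = 2.0, 3
q = lambda p: 1.0 / (lam * (1.0 - p))

def M_min(u, m=20000):
    h = (1.0 - u) / m
    I = sum((1.0 - (u + (k + 0.5) * h)) ** nsys * q(u + (k + 0.5) * h)
            for k in range(m)) * h
    return I / (1.0 - u) ** nsys

u0, eps = 0.4, 1e-4
Mprime = (M_min(u0 + eps) - M_min(u0 - eps)) / (2.0 * eps)  # central difference
lhs = (1.0 - u0) * q(u0)                 # left-hand side of the identity
rhs = nsys * M_min(u0) - (1.0 - u0) * Mprime
```

For this example M_{X_{1:n}}(u) is the constant 1/(nλ), so the derivative term vanishes and both sides equal 1/λ.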

Theorem 12
Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x) and mean residual quantile function M_{X_{1:n}}(u) for all u ∈ (0, 1). Then the relationship

ξ_α^{X_{1:n}}(u) = (1/(1 − α)) log ( c M_{X_{1:n}}(u) ),

where c is a constant, holds if and only if X has the generalized Pareto distribution with quantile function Q(u) = (b/a)((1 − u)^{−a/(a+1)} − 1).

Corollary 2
Let X_{1:n} denote the first order statistic with survival function F̄_{1:n}(x) and mean residual quantile function M_{X_{1:n}}(u). Then the relationship

ξ_α^{X_{1:n}}(u) = (1/(1 − α)) log ( c M_{X_{1:n}}(u) ),

where c is a constant, holds for all u ∈ (0, 1) if and only if (i) for c = 1/α, X follows the exponential distribution; (ii) for c < 1/α, X follows the Pareto I distribution; (iii) for c > 1/α, X follows the finite range distribution.

Let X_{n:n} be the largest order statistic in a random sample of size n from an absolutely continuous nonnegative random variable X. The dynamic cumulative past Rényi entropy for the sample maximum is

ξ̄_α^{X_{n:n}}(t) = (1/(1 − α)) log ( (1/F_{n:n}^α(t)) ∫_0^t F_{n:n}^α(x) dx ),

and its quantile-based version can be expressed as

ξ̄_α^{X_{n:n}}(u) = ξ̄_α^{X_{n:n}}(Q(u)) = (1/(1 − α)) log ( u^{−nα} ∫_0^u p^{nα} q(p) dp ). (56)

For some specific univariate continuous distributions, the expression (56) is evaluated in Table 4.2.
In the following theorem, we show that the power distribution can be characterized in terms of ξ̄_α^{X_{n:n}}(u).

Theorem 13

Let X_{n:n} denote the last order statistic with distribution function F_{n:n}(x) and reversed hazard quantile function K̃_{X_{n:n}}(u). Then ξ̄_α^{X_{n:n}}(u) satisfies

ξ̄_α^{X_{n:n}}(u) = (1/(1 − α)) log ( c / K̃_{X_{n:n}}(u) ), (57)

where c is a constant, if and only if X has the power distribution.

Proof
The reversed hazard quantile function for the sample maximum X_{n:n} of the power distribution Q(u) = a u^{1/b} is

K̃_{X_{n:n}}(u) = f_{n:n}(Q(u)) / F_{n:n}(Q(u)) = n / (u q(u)) = (nb/a) u^{−1/b};

substitution gives the "if" part of the theorem. To prove the "only if" part, suppose (57) holds. Using (45), it gives

u^{−nα} ∫_0^u p^{nα} q(p) dp = c / K̃_{X_{n:n}}(u).
Differentiating both sides with respect to u and simplifying, this reduces to q(u) = A u^{(1/b)−1}, where A is a constant, which characterizes the power distribution for c = nb/(nbα + 1).
We also have this characterization in terms of M̃_{X_{n:n}}(u). The proof follows along the same lines as that of Theorem 13 and is hence omitted.

Theorem 14
Let X_{n:n} denote the last order statistic with distribution function F_{n:n}(x) and quantile version of the mean inactivity time for the sample maximum M̃_{X_{n:n}}(u). Then ξ̄_α^{X_{n:n}}(u) satisfies

ξ̄_α^{X_{n:n}}(u) = (1/(1 − α)) log ( c M̃_{X_{n:n}}(u) ),

where c is a constant, if and only if X has the power distribution.

Weighted Rényi Quantile Entropy
Sometimes in statistical modeling, standard distributions are not suitable for the data at hand and we need to study weighted distributions. This concept has been applied in many areas of statistics, such as the analysis of family size, human heredity, wildlife population studies, renewal theory, biomedical studies and statistical ecology. Associated with a random variable X with pdf f(x) and a nonnegative real function w(x), we can define the weighted random variable X_w with density function

f_w(x) = w(x) f(x) / E(w(X)), 0 < E(w(X)) < ∞.

When w(x) = x, X_w is called the length- (size-) biased random variable. Using f_w(x), the corresponding density quantile function is given by

f_w(Q(u)) = w(Q(u)) f(Q(u)) / μ,

where μ = ∫_0^1 w(Q(p)) f(Q(p)) dQ(p) = ∫_0^1 w(Q(p)) dp. Weighted entropy has been used to balance the amount of information and the degree of homogeneity associated with a partition of data into classes. The quantile-based weighted Rényi entropy is of the form

(1/(1 − α)) log ( μ^{−α} ∫_0^1 [w(Q(p))]^α (q(p))^{1−α} dp ).
In the case of a length- (size-) biased random variable, the above expression is known as the length-biased weighted Rényi quantile entropy, given by

ξ_α^w(X) = (1/(1 − α)) log ( μ^{−α} ∫_0^1 (Q(p))^α (q(p))^{1−α} dp ), (59)

with μ = ∫_0^1 Q(p) dp. For some specific univariate continuous distributions, the expression (59) is evaluated in Table 5.1.
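The identity μ = E(w(X)) = ∫_0^1 w(Q(p)) dp underlying these weighted measures is easy to check; the sketch below compares the quantile integral with a Monte Carlo estimate for the length-biased weight w(x) = x and an exponential X (both illustrative choices):

```python
import math
import random

random.seed(7)
lam = 2.0
Q = lambda u: -math.log(1.0 - u) / lam   # exponential quantile (illustrative X)
w = lambda x: x                          # length-biased weight w(x) = x

# mu = E(w(X)) written as a quantile integral ∫_0^1 w(Q(p)) dp (midpoint rule)
n = 100000
mu_quantile = sum(w(Q((k + 0.5) / n)) for k in range(n)) / n

# Monte Carlo estimate of E(w(X)) via inverse-transform sampling
mu_mc = sum(w(Q(random.random())) for _ in range(n)) / n
```

Both estimates agree with E(X) = 1/λ = 0.5 up to quadrature and sampling error.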
Let Y denote the equilibrium random variable of the original random variable X; its distribution is called the equilibrium distribution of X. The equilibrium distribution arises as the limiting distribution of the forward recurrence time in a renewal process. We have f_Y(Q(u)) = F̄(Q(u))/μ = (1 − u)/μ, so the quantile density function of the equilibrium distribution is given by

q_Y(u) = 1/f_Y(Q(u)) = μ/(1 − u).

From (17), the dynamic cumulative residual Rényi quantile entropy (DCRRQE) for the equilibrium distribution is given by (65). For some well-known univariate continuous families of distributions, the expression (65) is evaluated in Table 5.2.
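Taking the stated quantile density q_Y(u) = μ/(1 − u) at face value and substituting it into (17), the equilibrium DCRRQE works out to the constant (1/(1 − α)) log(μ/α), independent of u; the sketch below verifies this numerically (μ and α are illustrative values):

```python
import math

mu, alpha = 1.8, 2.5            # illustrative values; mu = E(X) of the original variable
qY = lambda p: mu / (1.0 - p)   # quantile density of the equilibrium distribution

def dcrrqe_eq(u, m=100000):
    # (17) applied to the equilibrium distribution, by the midpoint rule
    h = (1.0 - u) / m
    I = sum((1.0 - (u + (k + 0.5) * h)) ** alpha * qY(u + (k + 0.5) * h)
            for k in range(m)) * h
    return math.log(I / (1.0 - u) ** alpha) / (1.0 - alpha)

const = math.log(mu / alpha) / (1.0 - alpha)
```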

Theorem 16
Let X be an absolutely continuous random variable. Then the stated relation for ξ_α^w(X; Q(u)) holds if and only if X follows the Rayleigh distribution.
Proof

Differentiating both sides with respect to u and using (67), then differentiating (67) with respect to u and substituting into (69), gives dM_w(u)/du = 0, or equivalently M_w(u) = k (a constant), which characterizes the Rayleigh distribution.

Theorem 17
If, for a nonnegative random variable X, the relationship ξ_α^w(X; Q(u)) = C holds, where C is a constant, then X has the Rayleigh distribution.

Definition 1
The distribution function F is said to be increasing (decreasing) in dynamic weighted cumulative residual Rényi quantile entropy, IDWCRRQE (DDWCRRQE), if ξ_α^w(X; Q(u)) is increasing (decreasing) in u ≥ 0. The following theorem gives an upper (lower) bound for the DWCRRQE in terms of the hazard quantile function.

Conclusion
The quantile-based study of entropy measures has found considerable interest among researchers as an alternative method for measuring the uncertainty of a random variable. In this paper we have proposed the dynamic cumulative residual Rényi quantile entropy and studied some of its properties and characterizations. We have introduced the quantile-based cumulative residual Rényi entropy of order statistics and its characterizations. We have also obtained the weighted Rényi quantile entropy and its residual form based on the cumulative function, and obtained some characterization results.