Some New Types of Convergence Definitions for Random Variable Sequences

Abstract. In this paper, we introduce the concepts of invariant convergence in probability, statistically invariant convergence in probability, invariant convergence almost surely, invariant convergence in distribution and invariant convergence in L_p-norm for sequences of random variables. We also investigate some inclusion relations between these concepts.


Introduction
In probability theory, we know several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an essential concept in probability theory and its applications to statistics.
The natural density of a set K of positive integers is defined by
δ(K) = lim_{n→∞} (1/n) |{k ≤ n : k ∈ K}|,
where |{k ≤ n : k ∈ K}| denotes the number of elements of K not exceeding n.
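As a quick numerical illustration (ours, not the paper's), the defining limit can be estimated by truncating at a finite n; the function name below is our own:

```python
import math

def density_estimate(indicator, n):
    """Estimate the natural density: (1/n) * |{k <= n : indicator(k)}|."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

# The even numbers have natural density 1/2.
print(density_estimate(lambda k: k % 2 == 0, 10**5))            # 0.5
# The perfect squares have natural density 0.
print(density_estimate(lambda k: math.isqrt(k)**2 == k, 10**5)) # 0.00316
```

The truncated averages stabilize exactly when the defining limit exists; for sets without natural density (e.g. blocks of digits) they oscillate.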
The concept of statistical convergence of number sequences was introduced by Fast [5]. In [25] Schoenberg established some basic features of statistical convergence and studied the notion as a summability method.
A continuous linear functional ϕ on ℓ∞, the space of real bounded sequences, is said to be an invariant limit if (a) ϕ(x) ≥ 0 when the sequence x = (x_n) has x_n ≥ 0 for all n, (b) ϕ(e) = 1, where e = (1, 1, 1, . . .), and (c) ϕ((x_{σ(m)})) = ϕ(x) for all x ∈ ℓ∞, where σ : N+ → N+. A sequence x ∈ ℓ∞ is said to be invariant convergent to ℓ if all of its invariant limits are equal to ℓ.
The mappings σ are one-to-one and such that σ^k(m) ≠ m for all positive integers k and m, where σ^k(m) = σ(σ^{k−1}(m)). Thus ϕ extends the limit functional on the space of convergent sequences, in the sense that ϕ(x) = lim x for all convergent sequences x. In the case σ is the translation mapping σ(m) = m + 1, the σ-mean is often called a Banach limit, and V_σ, the set of bounded sequences all of whose invariant means are equal, is the set of almost convergent sequences.
It can be shown that
V_σ = { x = (x_n) ∈ ℓ∞ : lim_{n→∞} (1/n) Σ_{k=1}^{n} x_{σ^k(m)} = ℓ, uniformly in m }.
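A minimal sketch of this characterization, assuming the translation mapping σ(m) = m + 1 (so that σ^k(m) = m + k): the bounded sequence x_n = (−1)^n diverges in the ordinary sense, yet its averages vanish uniformly in the starting point m, so it is invariant (almost) convergent to 0. The example and function names are our own:

```python
def x(n):
    """The bounded, ordinarily divergent sequence x_n = (-1)^n."""
    return (-1) ** n

def invariant_mean(n, m):
    """(1/n) * sum_{k=1}^{n} x_{sigma^k(m)} with sigma(m) = m + 1, i.e. sigma^k(m) = m + k."""
    return sum(x(m + k) for k in range(1, n + 1)) / n

# Consecutive terms cancel in pairs, so |mean| <= 1/n for EVERY starting point m.
print(max(abs(invariant_mean(1000, m)) for m in range(1, 200)))  # 0.0
```

The uniformity in m is the essential point: for the translation mapping this is exactly Lorentz's almost convergence.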
In this case, we write σ-lim x_n = ℓ. A sequence x = (x_k) is said to be strongly invariant convergent to ℓ if
lim_{n→∞} (1/n) Σ_{k=1}^{n} |x_{σ^k(m)} − ℓ| = 0, uniformly in m.
The set of all strongly invariant convergent sequences is denoted by [V_σ].

In [18], the concept of σ-statistically convergent sequence was introduced as follows: A sequence x = (x_k) is said to be σ-statistically convergent to ℓ if for every ε > 0,
lim_{n→∞} (1/n) |{k ≤ n : |x_{σ^k(m)} − ℓ| ≥ ε}| = 0, uniformly in m.

Throughout the paper, {X_n} will denote a sequence of random variables, where each X_n is defined on the same sample space W, with respect to a given class of events S and a given probability function P. The concept of statistical convergence in probability of a sequence of random variables {X_n} was introduced in [8] as follows: A sequence of random variables {X_n} is said to be statistically convergent in probability to a random variable X if for any ε, δ > 0,
lim_{n→∞} (1/n) |{k ≤ n : P(|X_k − X| ≥ ε) ≥ δ}| = 0.
The reader can also see [9]–[11] for related works.
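A toy illustration of statistical convergence in probability (our example, not the paper's): take the degenerate random variables X_k = 1 when k is a perfect square and X_k = 0 otherwise, with target X = 0. Then P(|X_k − 0| ≥ ε) equals 1 on the squares and 0 elsewhere, so the "bad" index set has natural density 0 and {X_k} is statistically convergent in probability to 0, although P(|X_k| ≥ ε) does not tend to 0 along the squares:

```python
import math

def prob_bad(k, eps=0.5):
    """P(|X_k - 0| >= eps) for the degenerate example: 1 on squares, 0 elsewhere."""
    return 1.0 if math.isqrt(k) ** 2 == k else 0.0

def bad_density(n, eps=0.5, delta=0.5):
    """(1/n) * |{k <= n : P(|X_k - 0| >= eps) >= delta}|."""
    return sum(1 for k in range(1, n + 1) if prob_bad(k, eps) >= delta) / n

print(bad_density(10**4))   # 0.01  -> 0 as n grows
```

Only the density of bad indices matters, which is what separates statistical convergence in probability from ordinary convergence in probability.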
A sequence of random variables {X_n} is said to be strongly Cesàro summable in probability to a random variable X if for each ε > 0,
lim_{n→∞} (1/n) Σ_{k=1}^{n} P(|X_k − X| ≥ ε) = 0.

Let {X_n} be a sequence of random variables. We say that {X_n} is bounded in probability if for every δ > 0 there exists a constant M > 0 such that P(|X_n| > M) < δ for all n. For more information on the concepts covered in this article, see [1], [2], [8]–[12], [27] on random variables, and [3]–[7], [13]–[26] on invariant convergence and statistical convergence.

Some new types of invariant convergence
Let M denote the set of all such mappings σ. Then ∩{V_σ : σ ∈ M} is equal to the set of all convergent sequences (see [21]). Therefore, the definitions and theorems we give in this section are meaningful: the definitions are more general, and the kinds of convergence previously known for random variable sequences can be derived from them.
2.1. Invariant convergence in probability.
Definition 2.1. A sequence of random variables {X_n} is said to be invariant convergent in probability to a random variable X if for every ε > 0, σ-lim P(|X_n − X| ≥ ε) = 0, that is,
lim_{n→∞} (1/n) Σ_{k=1}^{n} P(|X_{σ^k(m)} − X| ≥ ε) = 0, uniformly in m.
In this case, we write X_n → X (V_σ^P). Since the probabilities P(|X_n − X| ≥ ε) are nonnegative, this coincides with strong invariant convergence of the sequence (P(|X_n − X| ≥ ε)) to 0, and we then also say that {X_n} is strongly invariant convergent in probability to X. The class of all sequences of random variables which are invariant convergent in probability will be denoted by V_σ^P.
∩{V_σ^P : σ ∈ M} is equal to the set of all random variable sequences which are convergent in probability.

2.2. Statistically invariant convergence in probability.
Definition 2.2. The sequence {X_n} is said to be statistically invariant convergent in probability to a random variable X if for any ε, δ > 0,
lim_{n→∞} (1/n) |{k ≤ n : P(|X_{σ^k(m)} − X| ≥ ε) ≥ δ}| = 0, uniformly in m.
In this case, we write X_n → X (S_σ^P). The class of all sequences of random variables which are statistically invariant convergent in probability will be denoted by S_σ^P.
∩{S_σ^P : σ ∈ M} is equal to the set of all random variable sequences which are statistically convergent in probability.
Theorem 2.1. If a sequence of random variables {X n } is strongly invariant convergent in probability to X, then {X n } is statistically invariant convergent in probability to X.
Proof. Suppose that {X_n} is strongly invariant convergent in probability to X. For arbitrary ε, δ > 0 and each m, we have
(1/n) |{k ≤ n : P(|X_{σ^k(m)} − X| ≥ ε) ≥ δ}| ≤ (1/(nδ)) Σ_{k=1}^{n} P(|X_{σ^k(m)} − X| ≥ ε) → 0 (n → ∞),
uniformly in m; that is, {X_n} is statistically invariant convergent in probability to X.
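The only inequality this proof needs is the elementary counting bound δ·|{k ≤ n : p_k ≥ δ}| ≤ Σ_{k≤n} p_k for nonnegative p_k, which converts Cesàro-type smallness of the probabilities into density smallness of the bad index set. A brute-force numerical check (our own illustration):

```python
import random

random.seed(0)
for _ in range(100):
    # Stand-ins for the nonnegative numbers p_k = P(|X_{sigma^k(m)} - X| >= eps).
    p = [random.random() for _ in range(50)]
    delta = random.uniform(0.01, 1.0)
    count = sum(1 for v in p if v >= delta)        # |{k <= n : p_k >= delta}|
    assert delta * count <= sum(p) + 1e-12          # delta * count <= sum of the p_k
print("inequality holds")
```

Each term counted on the left contributes at least δ to the sum on the right, which is all the proof uses.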
Theorem 2.2. If a sequence of random variables {X n } is statistically invariant convergent in probability to X and bounded in probability, then {X n } is strongly invariant convergent in probability to X.
Proof. Suppose that {X_n} is statistically invariant convergent in probability to X and that {X_n} is bounded, say P(|X_{σ^k(m)} − X| ≥ ε) < M_ε for all m and k. Given ε, δ > 0, we get for each m
(1/n) Σ_{k=1}^{n} P(|X_{σ^k(m)} − X| ≥ ε) ≤ (M_ε/n) |{k ≤ n : P(|X_{σ^k(m)} − X| ≥ ε) ≥ δ}| + δ,
by splitting the sum over the indices k with P(|X_{σ^k(m)} − X| ≥ ε) ≥ δ and those with P(|X_{σ^k(m)} − X| ≥ ε) < δ. Hence we have
lim sup_{n→∞} (1/n) Σ_{k=1}^{n} P(|X_{σ^k(m)} − X| ≥ ε) ≤ δ, uniformly in m,
and since δ > 0 was arbitrary, {X_n} is strongly invariant convergent in probability to X.

Theorem 2.3. Suppose that {X_n} is invariant convergent in probability to a random variable X and that f is a continuous function. Then {f(X_n)} is invariant convergent in probability to f(X).

Proof. Let ε, δ > 0 be given. Choose K > 0 such that P(|X| > K) < δ. Since f is uniformly continuous on [−K − 1, K + 1], there exists an η ∈ (0, 1) such that |f(x) − f(y)| < ε whenever |x − y| < η and |y| ≤ K. Then, for all m and k,
{|f(X_{σ^k(m)}) − f(X)| ≥ ε} ⊆ {|X_{σ^k(m)} − X| ≥ η} ∪ {|X| > K},
so that P(|f(X_{σ^k(m)}) − f(X)| ≥ ε) ≤ P(|X_{σ^k(m)} − X| ≥ η) + δ. Since {X_n} is invariant convergent in probability to X,
lim_{n→∞} (1/n) Σ_{k=1}^{n} P(|X_{σ^k(m)} − X| ≥ η) = 0, uniformly in m,
and since δ > 0 was arbitrary, it follows that {f(X_n)} is invariant convergent in probability to f(X).

2.3. Invariant convergence almost surely. We say an event happens "surely" if it always happens, and "almost surely" if it happens with probability 1. Almost sure convergence is one of the main modes of stochastic convergence. It may be viewed as a notion of convergence for random variables that is analogous to, but not the same as, the notion of pointwise convergence for real functions. Now we introduce invariant convergence almost surely for random variable sequences.

Definition 2.3. We say that a sequence of random variables {X_n} is invariant convergent almost surely to a random variable X if
P({w ∈ W : σ-lim X_n(w) = X(w)}) = 1,
that is,
P({w ∈ W : lim_{n→∞} (1/n) Σ_{k=1}^{n} X_{σ^k(m)}(w) = X(w), uniformly in m}) = 1.
We will denote the set of all sequences of random variables {X_n} which are invariant convergent almost surely by AS_σ.
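A standard illustration of the almost-sure mode (our example, not from the paper): on W = [0, 1] with the uniform probability, X_n(w) = w^n converges almost surely to 0, since pointwise convergence fails only at w = 1, a set of probability 0. A Monte Carlo sketch:

```python
import random

random.seed(1)
samples = [random.random() for _ in range(10_000)]  # uniform draws from W = [0, 1)
n = 200

# Fraction of sample points w at which X_n(w) = w**n is already below 1e-3.
frac_close = sum(1 for w in samples if w ** n < 1e-3) / len(samples)
print(frac_close)   # close to 1, and -> 1 as n grows
```

The exceptional set {w : w^n does not tend to 0} = {1} is not empty, only null, which is exactly the difference between "surely" and "almost surely".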
∩{AS_σ : σ ∈ M} is equal to the set of all random variable sequences which are convergent almost surely.
Theorem 2.4. If a sequence of random variables {X n } is invariant convergent almost surely to a random variable X, then {X n } is invariant convergent in probability to X.
Proof. Let ε > 0. Since {X_n} is invariant convergent almost surely to a random variable X, we have σ-lim P(|X_n − X| < ε) = 1, and hence σ-lim P(|X_n − X| ≥ ε) = 0; that is, {X_n} is invariant convergent in probability to X.

2.4. Invariant convergence in distribution.
Definition 2.4. We say that a sequence of random variables {X_n} is invariant convergent in distribution to a random variable X if
lim_{n→∞} (1/n) Σ_{k=1}^{n} F_{X_{σ^k(m)}}(x) = F_X(x), uniformly in m,
for all x at which F_X(x) is continuous. The set of all sequences of random variables {X_n} which are invariant convergent in distribution will be denoted by ID_σ.
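A small check of this definition (assumption: σ(m) = m + 1, so σ^k(m) = m + k): for the degenerate variables X_k = a + 1/k, the averaged distribution functions (1/n) Σ_{k=1}^{n} F_{X_{σ^k(m)}}(x) tend to the distribution function of the constant X = a at every continuity point x ≠ a. The example is ours:

```python
a = 2.0

def F(k, x):
    """Distribution function of the degenerate variable X_k = a + 1/k."""
    return 1.0 if x >= a + 1.0 / k else 0.0

def averaged_cdf(n, m, x):
    """(1/n) * sum_{k=1}^{n} F_{X_{m+k}}(x), i.e. sigma(m) = m + 1."""
    return sum(F(m + k, x) for k in range(1, n + 1)) / n

print(averaged_cdf(10**4, 1, 2.5))   # 1.0  (x > a: limit distribution gives 1)
print(averaged_cdf(10**4, 1, 1.5))   # 0.0  (x < a: limit distribution gives 0)
```

At the discontinuity x = a itself the averages converge to 0 rather than F_X(a) = 1, which is why the definition excludes discontinuity points.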
∩{ID_σ : σ ∈ M} is equal to the set of all random variable sequences which are convergent in distribution.
Theorem 2.5. If a sequence of random variables {X n } is invariant convergent in distribution to a number a, then {X n } is invariant convergent in probability to a.
Proof. Since {X_n} is invariant convergent in distribution to the number a, whose distribution function F_a satisfies F_a(x) = 0 for x < a and F_a(x) = 1 for x ≥ a, we have for any ε > 0
lim_{n→∞} (1/n) Σ_{k=1}^{n} F_{X_{σ^k(m)}}(a − ε) = 0 and lim_{n→∞} (1/n) Σ_{k=1}^{n} F_{X_{σ^k(m)}}(a + ε/2) = 1, uniformly in m.
We can write, for any ε > 0,
P(|X_{σ^k(m)} − a| ≥ ε) ≤ F_{X_{σ^k(m)}}(a − ε) + 1 − F_{X_{σ^k(m)}}(a + ε/2).
Averaging over k = 1, . . . , n and letting n → ∞, the right-hand side tends to 0 uniformly in m, which means {X_n} is invariant convergent in probability to a.

2.5. Invariant convergence in L_p-norm.
Definition 2.5. A sequence of random variables {X_n} is said to be invariant convergent in the pth mean or in the L_p-norm (p ≥ 1) to a random variable X if
lim_{n→∞} (1/n) Σ_{k=1}^{n} E(|X_{σ^k(m)} − X|^p) = 0, uniformly in m.
If p = 2, it is called mean-square invariant convergence. The set of all sequences of random variables which are invariant convergent in the pth mean or in the L_p-norm will be denoted by E_σ^p.
∩{E_σ^p : σ ∈ M} is equal to the set of all random variable sequences that are convergent in the pth mean or in the L_p-norm.
Theorem 2.6. If a sequence of random variables {X_n} is invariant convergent in the L_p-norm (p ≥ 1) to X, then {X_n} is invariant convergent in probability to X.
Proof. For any ε > 0 and for each m and k, we can write, by Markov's inequality,
P(|X_{σ^k(m)} − X| ≥ ε) = P(|X_{σ^k(m)} − X|^p ≥ ε^p) ≤ E(|X_{σ^k(m)} − X|^p)/ε^p (since p ≥ 1).
Averaging over k = 1, . . . , n and letting n → ∞, the right-hand side tends to 0 uniformly in m, hence {X_n} is invariant convergent in probability to X.

In case σ(m) = m + 1, Definitions 2.1–2.5 yield the concepts of almost convergence in probability, statistically almost convergence in probability, almost convergence almost surely, almost convergence in distribution and almost convergence in L_p-norm, and the inclusion relations proved above remain valid between these concepts. These definitions and theorems have not appeared elsewhere until now.
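The Markov-type bound P(|Y| ≥ ε) ≤ E|Y|^p / ε^p that drives this proof can be checked numerically; we use a simple discrete variable (our own illustration) so that both sides are computed exactly:

```python
eps, p = 0.5, 2

# Y takes the values below with the given probabilities (they sum to 1).
values = [0.0, 0.3, 0.6, 1.2]
probs  = [0.4, 0.3, 0.2, 0.1]

lhs = sum(pr for v, pr in zip(values, probs) if abs(v) >= eps)        # P(|Y| >= eps)
rhs = sum(pr * abs(v) ** p for v, pr in zip(values, probs)) / eps**p  # E|Y|^p / eps^p
print(lhs, rhs)
assert lhs <= rhs
```

Applying this bound term by term and averaging over k is exactly how L_p invariant convergence transfers to invariant convergence in probability.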