Inertial Algorithms for Bifunction Harmonic Variational Inequalities

In this paper, we introduce and study some new classes of bifunction harmonic variational inequalities. Various new and known classes of variational inequalities and complementarity problems can be obtained as special cases of bifunction harmonic variational inequalities. The auxiliary principle technique is applied to suggest and analyze some hybrid inertial iterative methods for finding approximate solutions of bifunction harmonic variational inequalities, and the convergence of these methods is analyzed under suitable conditions. The results proved in this paper can be viewed as a refinement and improvement of known results. It remains an interesting open problem to develop implementable numerical methods for solving these problems and to explore their applications in the mathematical and engineering sciences.


Introduction
Variational inequality theory, introduced independently by Stampacchia [50] and Fichera [8] in 1964, can be viewed as a novel generalization of the variational principles. Variational inequalities can be applied to express the basic principles of the mathematical and physical sciences with simplicity and elegance. We would like to point out that this theory describes a broad spectrum of very interesting developments, linking various fields of mathematics, physics, economics, and the regional and engineering sciences. Variational inequalities have been extended and generalized in several directions using novel techniques; see [2, 5, 7, 9, 11-13, 15-17, 19-37, 40-45, 47, 48, 50]. There have been significant developments in variational inequalities related to multivalued, nonmonotone and nonconvex optimization and to structural analysis. An important and useful generalization of variational inequalities is the class of harmonic variational inequalities. Noor and Noor [26] established that the optimality conditions of a differentiable harmonic convex function on a harmonic convex set can be characterized by an inequality, which is called the harmonic variational inequality.
Harmonic variational inequalities and related optimization problems have witnessed explosive growth in theoretical advances, algorithmic developments and applications across almost all disciplines of engineering and the pure and applied sciences. The analysis of these problems requires a blend of techniques and ideas from convex analysis, functional analysis, numerical analysis and nonsmooth analysis.
In this paper, we introduce and consider some new classes of bifunction harmonic variational inequalities. Some special cases, such as complementarity problems and systems of absolute value harmonic variational inequalities, are obtained. There are several methods for solving variational inequalities, such as fixed point, projection, resolvent and descent methods. Due to the nature of bifunction harmonic variational inequalities, however, these methods cannot be applied to them. In recent years, the auxiliary principle technique has been used to suggest and analyze iterative methods for solving variational inequalities and equilibrium problems. Glowinski et al. [9] used this technique to study the existence problem for mixed variational inequalities. Noor [15, 18, 20-23], Noor et al. [12, 24-29, 34, 36, 37, 41, 43, 44], Patriksson [48] and Zhu et al. [51] have used this approach to suggest and analyze iterative methods for solving various classes of variational inequalities and their variant forms. In this paper, we again use this technique to suggest a class of iterative schemes for bifunction harmonic variational inequality problems. We also prove that the convergence of these methods requires only pseudomonotonicity or partially relaxed strong monotonicity, which are weaker conditions than monotonicity. As special cases, we obtain new iterative schemes for solving variational inequalities and equilibrium problems. The comparison of these methods with other methods is a subject of future research.

Preliminaries
Let H be a real Hilbert space, whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Let C ⊆ H be a nonempty closed convex set, let f : H → R be a locally Lipschitz continuous function, and let Ω be an open bounded subset of R^n. First of all, we recall the following concepts and results from nonsmooth analysis; see [4, 6].

Definition 2.1. [4] Let f be locally Lipschitz continuous at a given point x ∈ H and let v be any other vector in H. The Clarke generalized directional derivative of f at x in the direction v, denoted by f⁰(x; v), is defined as

f⁰(x; v) = lim sup_{y→x, t↓0} [f(y + tv) − f(y)]/t.

The generalized gradient of f at x, denoted ∂f(x), is defined to be the subdifferential of the function f⁰(x; ·) at 0, that is,

∂f(x) = {ξ ∈ H : f⁰(x; v) ≥ ⟨ξ, v⟩, for all v ∈ H}.

If f is convex on C and locally Lipschitz continuous at x ∈ C, then ∂f(x) coincides with the subdifferential of f at x in the sense of convex analysis, and f⁰(x; v) coincides with the directional derivative f′(x; v).

Definition 2.3. [4] The function φ on the harmonic convex set C_h is said to be harmonic convex if

φ(uv/((1−λ)v + λu)) ≤ (1−λ)φ(u) + λφ(v), for all u, v ∈ C_h, λ ∈ [0, 1].

The function φ is said to be harmonic concave if and only if −φ is harmonic convex.
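As a small numerical illustration (not from the paper), the lim sup in the Clarke generalized directional derivative can be approximated by maximizing the difference quotient over base points near x. For the nonsmooth convex function f(x) = |x|, one has f⁰(0; v) = |v|; the function, neighbourhood radius and step size below are illustrative choices.

```python
def clarke_directional_derivative(f, x, v, radius=1e-4, t=1e-6, samples=201):
    """Crude approximation of f0(x; v) = limsup_{y->x, t->0+} [f(y+t*v)-f(y)]/t:
    maximize the difference quotient over base points y near x, small fixed t."""
    best = float("-inf")
    for k in range(samples):
        y = x - radius + (2 * radius) * k / (samples - 1)
        quotient = (f(y + t * v) - f(y)) / t
        best = max(best, quotient)
    return best

f = abs
print(clarke_directional_derivative(f, 0.0, 1.0))   # close to 1.0 = |1|
print(clarke_directional_derivative(f, 0.0, -2.0))  # close to 2.0 = |-2|
```

Because |x| is not differentiable at 0, the ordinary directional derivative and the Clarke derivative are computed over a neighbourhood rather than at the single point; shrinking `radius` and `t` together tightens the approximation.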

Definition 2.4. A function f is said to be strongly harmonic convex with modulus μ > 0 if it satisfies the harmonic convexity inequality strengthened by a quadratic term with modulus μ.
If a strongly harmonic convex function is differentiable, then the corresponding first-order characterization holds, and conversely.
We recall that a minimum of a differentiable harmonic convex function on the harmonic convex set C_h can be characterized by a variational inequality. This result is due to Noor and Noor [26].
Theorem 2.1. Let φ be a differentiable harmonic convex function on the harmonic convex set C_h. Then u ∈ C_h is a minimum of φ if and only if u ∈ C_h satisfies the inequality (2.1). An inequality of type (2.1) is called the harmonic variational inequality.
Proof. Let u ∈ C_h be a minimum of the differentiable harmonic convex function φ. Then, substituting into (2.2), dividing by λ and taking the limit as λ → 0, we obtain the required inequality (2.1). Conversely, let the function φ be harmonic convex on the harmonic convex set C_h. Then, combining the harmonic convexity inequality with (2.1), it follows that φ(u) ≤ φ(v), for all v ∈ C_h. This shows that u ∈ C_h is the minimum of the differentiable harmonic convex function φ.
We would like to mention that Theorem 2.1 implies that the harmonic optimization problem can be studied via the harmonic variational inequality (2.1).
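The content of Theorem 2.1 can be checked numerically in one dimension: at a minimizer u of a differentiable harmonic convex function φ on C_h, the one-sided derivative of φ along every harmonic path from u to v ∈ C_h is nonnegative. The function φ and the set C_h = [2, 5] below are illustrative choices, not taken from the paper; φ(x) = x + 1/x is harmonic convex on (0, ∞) because x ↦ φ(1/x) is convex there.

```python
def phi(x):            # harmonic convex on (0, inf): phi(1/x) = 1/x + x is convex
    return x + 1.0 / x

def path(u, v, t):     # harmonic path from u (t = 0) to v (t = 1)
    return u * v / ((1 - t) * v + t * u)

u = 2.0                # minimizer of phi on C_h = [2, 5], since phi' > 0 there
h = 1e-6               # step for the one-sided difference quotient
for v in [2.5, 3.0, 4.0, 5.0]:
    deriv = (phi(path(u, v, h)) - phi(u)) / h
    print(v, deriv >= -1e-9)   # True: nonnegative for every v in C_h
```

This is exactly the variational-inequality characterization of the minimum: the derivative of φ along any admissible harmonic direction at u cannot be negative.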
Using the ideas and techniques of Theorem 2.1, we can derive the following result.
Theorem 2.2. Let φ be a differentiable harmonic convex function on the harmonic convex set C_h. Then the inequality (2.2) holds. In many cases, harmonic variational inequalities may not arise as the optimality conditions of differentiable harmonic convex functions. This motivates us to consider a more general problem, which includes problem (2.1) as a special case.
Special Cases. We now discuss some special cases of problem (2.3):
(I). The resulting problem is called the bifunction harmonic hemivariational inequality.
(II). For Φ(u, A(u), uv/(u−v)) = f⁰(u; uv/(u−v)), for all v ∈ C_h, problem (2.3) reduces to (2.5). Here f⁰(u; uv/(u−v)) denotes the generalized directional derivative of the function f at u in the direction uv/(u−v). Such nonlinear functions f arise in structural analysis; see [46, 47]. A problem of type (2.5) is called the bifunction harmonic hemivariational inequality. Panagiotopoulos [46, 47] studied hemivariational inequalities to formulate variational principles connected to energy functions which are neither convex nor smooth. It has been shown that the technique of hemivariational inequalities is very efficient for describing the behaviour of complex structures arising in the engineering and industrial sciences.
which is known as the mildly nonlinear bifunction harmonic variational inequality and appear to be a new one.
, where B(., .) and W(., .)are continuous bifunctions, then problem (2.3) is equivalent to finding u ∈ C h such that which is called the bifunction directional harmonic variational inequality.
which is known as the harmonic variational inequality involving the sum of two monotone operators, introduced and studied by Noor et al. [28, 29, 41-43].
The resulting problem is called the harmonic complementarity problem, which appears to be new. For applications, numerical methods and other aspects of linear and nonlinear complementarity problems, see [5, 11, 14, 16, 17, 29, 41, 45] and the references therein.
(VIII). For Φ(A(u), uv/(u−v)) = 0, problem (2.3) reduces to finding u ∈ C_h such that (2.11) holds, which is called the bifunction harmonic variational inequality. In brief, for suitable and appropriate choices of the bifunctions, one can obtain several classes of harmonic variational inequalities, complementarity problems, absolute value harmonic inequalities and harmonic optimization problems. This clearly shows that problem (2.3) is more general and flexible, and includes the previous problems as special cases.
Definition 2.5. The bifunction F(·, ·) and the operator T are said to be:
(a) jointly monotone with respect to Φ(·, ·);
(b) jointly pseudomonotone with respect to Φ(·, ·);
(c) partially relaxed strongly jointly monotone with respect to Φ(·, ·), with a constant γ > 0.
Note that for z = u, partially relaxed strong joint monotonicity reduces to joint monotonicity. This shows that partially relaxed strong joint monotonicity implies joint monotonicity, but the converse is not true.
Let f be a differentiable harmonic convex function, and let B(·, ·) denote the associated bifunction. If the function E is a differentiable strongly harmonic convex function, then, applying Lemma 2.2, we introduce a new general distance function, and similarly for a strongly monotone operator M with constant β > 0. We give the following important examples of practically important types of harmonic convex functions f and their corresponding Bregman distance functions.

Examples
(1) For the convex function f(v) = ‖v‖², the corresponding distance is the squared Euclidean Bregman distance function (SE).
(2) If the Shannon entropy f(v) = Σ_{i=1}^n v_i log v_i is taken as the differentiable harmonic convex function, then its corresponding harmonic Bregman distance function is the harmonic Kullback-Leibler distance (KL), which may serve as a very important tool in several areas of applied mathematics such as information theory, data analysis and machine learning.
(3) If the Burg entropy f(v) = −Σ_{i=1}^n log v_i is taken as the differentiable harmonic convex function, then its corresponding harmonic Bregman distance function is called the harmonic Itakura-Saito distance (IS) and appears to be new. It is not symmetric, that is, B(v, u) ≠ B(u, v). One advantage of the Itakura-Saito divergence is its scale invariance, B(λv, λu) = B(v, u) for any number λ > 0, which makes it a very suitable measure for the comparison of audio spectra. One can explore the applications of this harmonic Bregman distance function in data analysis, information theory and machine learning. These harmonic Bregman distance functions may inspire further research and applications in various branches of risk analysis, transportation, computer-aided design, quantum calculus, fuzzy systems and related optimization problems.
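The three distances named above can be sketched with the ordinary (Euclidean) Bregman construction B_f(v, u) = f(v) − f(u) − ⟨∇f(u), v − u⟩; the paper's harmonic variants replace v − u by a harmonic direction, which we do not reproduce here. The sample vectors are illustrative.

```python
import math

def bregman(f, grad, v, u):
    """Euclidean Bregman distance B_f(v, u) = f(v) - f(u) - <grad f(u), v - u>."""
    return f(v) - f(u) - sum(g * (vi - ui) for g, vi, ui in zip(grad(u), v, u))

# (1) f(v) = sum v_i^2        -> squared Euclidean distance ||v - u||^2
f_se = lambda v: sum(x * x for x in v)
g_se = lambda u: [2 * x for x in u]
# (2) Shannon entropy         -> generalized Kullback-Leibler distance
f_kl = lambda v: sum(x * math.log(x) for x in v)
g_kl = lambda u: [math.log(x) + 1 for x in u]
# (3) Burg entropy            -> Itakura-Saito distance
f_is = lambda v: -sum(math.log(x) for x in v)
g_is = lambda u: [-1.0 / x for x in u]

v, u = [1.0, 2.0], [2.0, 3.0]
print(bregman(f_se, g_se, v, u))            # 2.0 = ||v - u||^2
print(bregman(f_kl, g_kl, v, u))            # positive (generalized KL >= 0)
print(bregman(f_is, g_is, v, u),            # not symmetric:
      bregman(f_is, g_is, u, v))            # the two values differ
c = 7.0                                     # scale invariance of Itakura-Saito
scaled = bregman(f_is, g_is, [c * x for x in v], [c * x for x in u])
print(abs(scaled - bregman(f_is, g_is, v, u)) < 1e-9)   # True
```

The scale-invariance check mirrors the property B(λv, λu) = B(v, u) stated in the text, while the asymmetry check illustrates B(v, u) ≠ B(u, v).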
Remark 2.1.It is a challenging problem to explore the applications of harmonic Bregman distance function for other types of nonconvex functions such as biconvex, k-convex functions, preinvex functions and other variant forms of convex functions.
For a given u ∈ C_h satisfying (2.3), consider the auxiliary problem of finding w ∈ C_h satisfying the auxiliary harmonic variational inequality, where ρ > 0 is a constant and E′(u) is the differential of a strongly harmonic convex function E at u ∈ C_h. Clearly, if w = u, then w is a solution of problem (2.3). This observation enables us to suggest and analyze the following iterative method for solving (2.3).
Algorithm 3.1. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, where ρ ≥ 0 and η ∈ [0, 1] are constants. Algorithm 3.1 is called the hybrid proximal point method.
For η = 0, Algorithm 3.1 reduces to the following method for solving problem (2.3).
Algorithm 3.2. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, where ρ > 0 is a constant. Algorithm 3.2 is called the proximal method for solving problem (2.3).
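A minimal one-dimensional sketch of the auxiliary-principle iteration can be given for the simplest special case of (2.3), the plain variational inequality ⟨T(u), v − u⟩ ≥ 0 on an interval, with the auxiliary term generated by E(v) = v²/2. The auxiliary problem then has a closed-form solution as a projected step; the operator T, the set C = [1, 5] and the parameter values below are illustrative assumptions, not the paper's data.

```python
def proximal_step(T, u, rho, lo, hi):
    """Minimize rho*T(u)*(v - u) + (v - u)**2 / 2 over v in [lo, hi];
    the minimizer is the clipped (projected) point u - rho*T(u)."""
    w = u - rho * T(u)
    return min(max(w, lo), hi)

T = lambda u: u - 2.0          # strongly monotone operator; VI solution u = 2
u, rho = 5.0, 0.5
for _ in range(60):
    u = proximal_step(T, u, rho, 1.0, 5.0)
print(round(u, 6))             # -> 2.0
```

Each pass solves one auxiliary problem exactly; for the genuinely harmonic and bifunction cases of (2.3), the auxiliary problem has no such closed form and must itself be solved approximately, which is the implementation difficulty the paper notes for proximal point methods.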

In a special case, Algorithm 3.2 collapses to:
Algorithm 3.4. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme.
(III). If Φ(u, A(u), uv/(u−v)) = 0, then Algorithm 3.2 collapses to:
Algorithm 3.5. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme.
(IV). For W(Tu, uv/(u−v)) = 0, Algorithm 3.4 reduces to:
Algorithm 3.6. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme.
In brief, for suitable and appropriate choices of the operators and the spaces, one can obtain a number of known and new algorithms for solving variational-like inequalities and related problems.
We now consider the convergence analysis of Algorithm 3.2.
Theorem 3.1. Let F(·, ·) and the operator T be jointly pseudomonotone with respect to Φ(·, ·), and let E be a differentiable strongly harmonic convex function with modulus β > 0. Then the approximate solution u_{n+1} obtained from Algorithm 3.2 converges to a solution u ∈ C_h satisfying (2.3).
Proof. Taking v = u in (3.3) and v = u_{n+1} in (3.6), we obtain (3.7) and a companion inequality. We now estimate the Bregman function by using the strong harmonic convexity of E.
If u_{n+1} = u_n, then clearly u_n is a solution of the bifunction harmonic variational inequality (2.3).
Otherwise, it follows that B(u, u_n) − B(u, u_{n+1}) is nonnegative, and we must have lim_{n→∞} ‖u_{n+1} − u_n‖ = 0. Now, using the technique of Zhu and Marcotte [51], it can be shown that the entire sequence {u_n} converges to a cluster point u satisfying the bifunction harmonic variational inequality (2.3).
It is well known that, to implement proximal point methods, one has to find the approximate solution implicitly, which is itself a difficult problem. To overcome this drawback, we now consider another method for solving (2.3), which is a special case of Algorithm 3.1.
(V). For η = 1, Algorithm 3.1 reduces to:
Algorithm 3.7. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, which is called the implicit iterative method for solving problem (2.3).
To implement Algorithm 3.7, we use the predictor-corrector technique to suggest the following two-step method for solving problem (2.3).
Algorithm 3.8. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding two-step iterative scheme, for all v ∈ C_h.
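A hedged one-dimensional sketch of a two-step predictor-corrector iteration in the spirit of Algorithm 3.8 can be written for the plain variational inequality special case of (2.3) with E(v) = v²/2, giving an extragradient-type scheme; the operator, set and parameters are illustrative assumptions.

```python
def clip(x, lo, hi):
    """Projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

T = lambda u: u - 2.0                  # monotone operator; VI solution u = 2
u, rho, lo, hi = 5.0, 0.5, 1.0, 5.0
for _ in range(60):
    w = clip(u - rho * T(u), lo, hi)   # predictor step evaluated at u_n
    u = clip(u - rho * T(w), lo, hi)   # corrector step reuses the predictor w
print(round(u, 6))                     # -> 2.0
```

The predictor supplies a provisional point w at which the operator is re-evaluated, so each full iteration solves two explicit auxiliary problems instead of one implicit one.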
(VII). If F(Tu, uv/(u−v)) = ⟨Tu, uv/(u−v)⟩ and W(Au, uv/(u−v)) = ⟨Au, uv/(u−v)⟩, then Algorithm 3.9 collapses to the following method for solving the harmonic variational inequality (2.4).
Algorithm 3.10. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, for all v ∈ C_h.
(VIII). If Φ(A(u), uv/(u−v)) = 0, then Algorithm 3.7 reduces to the following iterative method for solving the bifunction harmonic variational inequality (2.11).
Algorithm 3.11. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme.
Using the technique of Theorem 3.1, one can consider the convergence analysis of Algorithm 3.7.
(IX). If η = 1/2, then Algorithm 3.1 collapses to:
Algorithm 3.12. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding mid-point iterative scheme. Algorithm 3.12 is called the hybrid mid-point proximal point method.
In a similar way, for suitable and appropriate choice of the operators and the spaces, one can obtain various known and new algorithms for solving bifunction harmonic variational inequalities and their variant forms.
We again apply the auxiliary principle technique to consider some further approximate schemes for solving problem (2.3), which do not involve the Bregman function technique.
For a given u ∈ C_h satisfying (2.3), consider the problem of finding w ∈ C_h such that (3.13) holds, where ρ > 0 and η ∈ [0, 1] are constants. An inequality of type (3.13) is called the auxiliary bifunction harmonic variational inequality. Clearly, if w = u, then w is a solution of (2.3). This simple observation enables us to suggest the following iterative method for solving (2.3).
Algorithm 3.13. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme. Algorithm 3.13 is called the hybrid proximal point algorithm for solving problem (2.3).

Special Cases
We now consider some special cases of Algorithm 3.13.
û ≥ 0, for all v ∈ C_h, which implies that û solves the bifunction harmonic variational inequality (2.3). Thus, it follows from the above inequality that {u_n}_{n=0}^∞ has exactly one limit point û and lim_{n→∞} u_n = û, which is the required result.
We again consider the auxiliary principle technique to suggest some hybrid inertial proximal point methods for solving the problem (2.3).
Clearly, for w = u, w is a solution of (2.3). This fact motivates us to suggest the following inertial iterative method for solving (2.3).
Algorithm 3.18. For given u_0, u_1 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, which is known as the hybrid inertial iterative method.
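A hedged one-dimensional sketch of an inertial step in the spirit of Algorithm 3.18: the extrapolated point u_n + α(u_n − u_{n−1}) is fed into a projected auxiliary step (squared-Euclidean choice E(v) = v²/2, plain variational inequality special case). The operator, set and parameter values are illustrative assumptions.

```python
def clip(x, lo, hi):
    """Projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

T = lambda u: u - 2.0                        # VI solution is u = 2 on [1, 5]
u_prev, u, rho, alpha = 5.0, 5.0, 0.5, 0.3   # two starting points u_0, u_1
for _ in range(80):
    y = u + alpha * (u - u_prev)             # inertial extrapolation step
    u_prev, u = u, clip(y - rho * T(y), 1.0, 5.0)
print(round(u, 6))                           # -> 2.0
```

The inertial term reuses the previous iterate as momentum; with α = 0 the scheme falls back to the plain projected auxiliary step, matching the remark that the non-inertial algorithms are special cases.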
Note that for α = 1, Algorithm 3.18 is exactly Algorithm 3.14.
(IV). If η = 1, then Algorithm 3.18 reduces to:
Algorithm 3.19. For given u_0, u_1 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, which is known as the explicit inertial iterative method.
Using essentially the technique of Theorem 3.2, Theorem 3.3 and Noor [10], one can study the convergence analysis of Algorithm 3.21.
We now apply the auxiliary principle technique involving an arbitrary operator M to consider some hybrid approximate schemes for solving problem (2.3), which contain the Bregman function technique as a special case. This technique was introduced and considered by Noor [17].
For a given u ∈ C_h satisfying (2.3), consider the problem of finding w ∈ C_h such that (3.24) holds, where ρ > 0, α > 0 and η ∈ [0, 1] are constants and M is an arbitrary operator.
Clearly, for w = u, w is a solution of (2.3). This fact motivates us to suggest the following inertial iterative method for solving (2.3).
Algorithm 3.22. For given u_0, u_1 ∈ H, compute the approximate solution u_{n+1} by the corresponding iterative scheme, which is known as the hybrid inertial proximal point method.
If M(u) = E′(u), problem (3.24) reduces to the following new auxiliary problem, which contains problem (3.13) as a special case, where ρ > 0, α > 0 and η ∈ [0, 1] are constants and E′ is the differential of a strongly harmonic convex function E.
Clearly, for w = u, w is a solution of (2.3). This fact motivates us to suggest another inertial iterative method for solving (2.3), which is also known as a hybrid inertial proximal point method.
Remark 3.1. For different and appropriate values of the parameters η and α, of the bifunctions F(·, ·) and Φ(·, ·), of the operators T, A and M, of the harmonic convex set C_h and of the underlying spaces, we can obtain a wide class of inertial-type iterative methods for solving harmonic variational inequalities and related optimization problems. This shows that the proposed algorithms are quite flexible, unified and general.

Conclusion.
We have considered and investigated some new classes of bifunction harmonic variational inequalities in this paper. It is shown that several important problems, such as harmonic complementarity problems, systems of harmonic absolute value problems and related problems, can be obtained as special cases. The auxiliary principle technique is applied to suggest several hybrid inertial-type methods for finding approximate solutions of bifunction harmonic variational inequalities. The convergence criteria of the proposed methods are investigated under suitable weaker conditions. We note that this technique is independent of the projection and the resolvent of the operator. It is an interesting open problem to explore the applications of harmonic variational inequalities in various branches of the pure and applied sciences and to develop implementable numerical methods. It is also an interesting problem to implement these methods numerically and compare them with other iterative schemes.
Authors' Contributions: All authors contributed equally to the design of the work, reviewed the manuscript, and agreed to the published version of the manuscript.

Theorem 3.3.
Let H be a finite-dimensional space and let all the assumptions of Theorem 3.2 hold. Then the sequence {u_n}_{n=0}^∞ generated by Algorithm 3.14 converges to a solution u of (2.3). Proof. Let u ∈ C_h be a solution of (2.3). From (3.15), it follows that the sequence {‖u − u_n‖} is nonincreasing, and consequently {u_n} is bounded. Furthermore, we have Σ_{n=0}^∞ ‖u_{n+1} − u_n‖² ≤ ‖u_0 − u‖², which implies that lim_{n→∞} ‖u_{n+1} − u_n‖ = 0. (3.22) Let û be a limit point of {u_n}_{n=0}^∞; a subsequence {u_{n_j}}_{j=1}^∞ of {u_n}_{n=0}^∞ converges to û ∈ H. Replacing w_n by u_{n_j} in (3.15), taking the limit as n_j → ∞ and using (3.22), we have
which is called the modified distance function. It is an interesting open problem to explore the applications of this equivalent modified distance function in the information sciences, data analysis, machine learning, artificial intelligence, optimization and variational inequalities. It is important to emphasize that different choices of the function f give different harmonic Bregman distance functions.