Modeling of a Collapsing Cavitation Bubble near a Rough Solid Wall by a Multi-Relaxation-Time Pseudopotential Lattice Boltzmann Model

Cavitation bubble collapse near a rough solid wall is modeled by the multi-relaxation-time (MRT) pseudopotential lattice Boltzmann (LB) model. A modified forcing scheme, which achieves the LB model's thermodynamic consistency by tuning a parameter related to the particle-interaction range, is adopted to obtain the desired stability and density ratio. Bubble collapse near a rough solid wall is simulated with the improved MRT pseudopotential LB model. The mechanism of bubble collapse is studied by investigating the evolution of the bubble profiles, the pressure field and the velocity field. The eroding effects of the collapsing bubble are analyzed in detail. It is found that both the process and the effect of the interaction between the collapsing bubble and the rough solid wall are strongly affected by the geometry of the solid boundary. The results also demonstrate that the MRT pseudopotential LB model is a promising tool for investigating the interaction mechanism between a collapsing bubble and a boundary of complex geometry.

3), we refer to [2]-[12] and the references cited therein. In Section 2, we present some basic lemmas. In Section 3, we establish oscillation criteria for the oscillation of all solutions of the system (1.1). Examples are given in Section 4 to illustrate our theorems. Next, we state a lemma whose proof can be found in [1].
Lemma 2.3. Assume that {a_n} is a non-negative real sequence that is not identically zero for infinitely many values of n, and that l is a positive integer. If
x_{n-l} ≥ a_n z_{n-l}, n ≥ N, (3.6)
then, summing the second equation of (1.1) from n to i, using (3.5) and then letting i → ∞, we obtain the required estimate.
Case (2). The proof of case (2) is similar to that of the above theorem and hence the details are omitted. The proof is now complete.
Case 2. The proof for this case is similar to that of Theorem (3.1). Here we use condition (3.16) instead of condition (2.1). The proof is complete.

Examples
Example 4.1. Consider the difference system

Introduction
The Standard Model (SM) of particle physics is supported by two theoretical "pillars": the gauge principle and the Higgs mechanism for particle mass generation. In this theoretical model, the mass of a particle depends on its interaction with the Higgs field, a medium that permeates the universe. The SM predicts the existence of a neutral spin-0 particle (the Higgs boson) associated with the Higgs field, but it does not predict its mass. Whereas the gauge principle has been firmly established through precision electroweak measurements, the Higgs mechanism is yet to be fully tested.

R. Belusevic
The Higgs-boson mass, m_H, affects the values of electroweak observables through radiative corrections. Many of the electroweak measurements obtained so far may be combined to provide a global test of consistency with the SM. The best constraint on m_H is obtained by making a global fit to the electroweak data. Such a fit is consistent with the value of the Higgs mass measured at the Large Hadron Collider (LHC) [1] [2].
To discover a new particle (such as the Higgs boson), or to search for physics beyond the SM, usually requires the use of high-energy hadron or electron-positron colliders. However, many important discoveries in particle physics have been made using proton beams with relatively low energies but high intensities.

The Proposed Proton/Electron Facility at KEK
The main "bottleneck" limiting the beam power in circular machines is caused by space charge effects that produce beam instabilities. Such a "bottleneck" exists at the J-PARC proton synchrotron complex, and is also intrinsic to the "proton drivers" envisaged at CERN and Fermilab. To increase maximally the beam power of a "proton driver", it is proposed to build a facility consisting solely of a low-energy injector linac and a high-energy pulsed superconducting linac.
Pulsed operation is preferred over the CW mode (continuous wave, 100% duty) mainly because the former allows the use of rf cavities with high accelerating gradients. This would considerably reduce the overall length of the machine, which is limited by the size of the KEK site.
The layout of the proposed proton/electron facility at KEK is shown in Figure 1. A 2.5 GeV proton linac (PI) serves both as an injector to a superconducting linac (SCL) and as a source of high-intensity proton beams. The beam parameters are listed in Table 1.

Main Characteristics of an ILC-Type Linac
The main characteristics of a linear accelerator are determined by the properties of its rf source (klystrons) and accelerating cavities. For a pulsed ILC-type superconducting linac, one of the currently available rf sources is the Toshiba E3736 Multi-Beam Klystron [3]. This source has the following well-tested specifications: rf frequency 1.3 GHz; peak rf power 10 MW; average power 150 kW; efficiency 65%; pulse length 1.5 ms; repetition rate 10 Hz. If the repetition rate of the Toshiba klystron is increased by a factor of two, while its peak power is reduced by the same factor (thus keeping the average power constant), one obtains the klystron specifications presented in Table 1. For such a klystron, a suitable 20 Hz pulse modulator has to be developed. A very important parameter that determines, to a large extent, the power conversion efficiency of a klystron is its perveance, defined by K = I_0/U^(3/2). In this expression, I_0 is the beam current and U is the anode voltage.
Since there is an upper limit to the applied voltage, low perveance can only be obtained by operating with low currents. For single-beam klystrons, this requirement is not compatible with the need for high output power. With this in mind, multi-beam klystrons (MBK) were originally developed in the 1960s [4]. An MBK is a parallel assembly of low-current (low-perveance) beamlets within a common rf structure, which efficiently generates high output power. Using Equation (3), the output rf power of an MBK can be expressed in terms of the perveance of its beamlets. In order to increase the beam power of the SCL, R could be increased to 20 Hz (see Equation (1)). As already mentioned, the peak power of the klystron would then have to be reduced to 5 MW, so that its average power is still 150 kW for the klystron pulse length τ = 1.5 ms (see Table 1).
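The repetition-rate/peak-power trade-off described above is simple duty-cycle arithmetic. The sketch below checks that both parameter sets give the same 150 kW average power; the beamlet current and voltage passed to the perveance function are purely illustrative values, not taken from the text.

```python
# Duty-cycle arithmetic for the klystron parameters quoted in the text.

def average_power(peak_w, pulse_s, rep_hz):
    """Average power = peak power x duty factor (pulse length x rate)."""
    return peak_w * pulse_s * rep_hz

p10 = average_power(10e6, 1.5e-3, 10)  # Toshiba E3736: 10 MW, 1.5 ms, 10 Hz
p20 = average_power(5e6, 1.5e-3, 20)   # halved peak power, doubled rate

print(p10, p20)  # 150000.0 150000.0 -> 150 kW in both cases

def perveance(current_a, voltage_v):
    """K = I0 / U^(3/2); low perveance favors high conversion efficiency."""
    return current_a / voltage_v**1.5

# Purely illustrative beamlet values (not from the text):
print(f"{perveance(5.0, 115e3):.2e}")
```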
The basic properties of a 1.3 GHz superconducting ILC-type cavity are presented, e.g., in [5]. There are two important parameters that characterize rf cavities: the accelerating gradient E_acc and the unloaded quality factor Q_0. The former is a measure of how much the energy of a particle is increased over a given length of the linac (typically expressed in units of MV/m), while the latter specifies how well the cavity can sustain the stored rf power. A higher value of Q_0 implies a lower rate of power loss relative to the stored energy¹. ILC-type cavities must have a nominal Q_0 (a dimensionless parameter) greater than 1 × 10^10 at E_acc = 31.5 MV/m.
Each ILC-type cryomodule for the proposed SCL would contain eight niobium 9-cell cavities and a quadrupole magnet at its centre. Other major components of such a cryomodule are the vacuum vessel, thermal and magnetic shields, cryogenic piping, interconnections, etc. The inactive spaces between cavities or cryomodules (the "packing fraction") are responsible for a substantial reduction in the average accelerating gradient of the linac. The beam physics and the lattice design of the superconducting L-band linac described in [6] are applicable to the SCL.
The average usable accelerating gradient in ILC-type cavities is E_acc = 29.3 ± 5.1 MV/m [7]. Taking into account an estimated linac "packing fraction" of about 70%, the effective accelerating gradient of the SCL is E_eff ≈ 20 MV/m. Hence, the total length of a 20 GeV linac is ~1000 m.
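A quick consistency check of the gradient and length figures above; the input values are taken from the text, and the arithmetic is the only addition:

```python
# Effective gradient = usable cavity gradient x packing fraction;
# linac length = beam energy / effective gradient.

cavity_gradient = 29.3    # MV/m, average usable gradient [7]
packing_fraction = 0.70   # active fraction of the linac length

effective_gradient = cavity_gradient * packing_fraction
length_m = 20e3 / effective_gradient   # 20 GeV = 20,000 MeV

print(round(effective_gradient, 1), round(length_m))  # 20.5 975
```

This reproduces the quoted E_eff ≈ 20 MV/m and a total length of roughly 1000 m.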
Since the length of an ILC 9-cell cavity is 1 m, a linac with E_b = 20 GeV would require roughly 700 such cavities. The usable accelerating gradient is generally lower in the CW mode of operation than in the pulsed mode.
¹ The Q factor of an rf cavity is defined as Q = 2π × (energy stored/energy dissipated per cycle). For large values of Q, the Q factor is approximately the number of oscillations required for the energy of a freely oscillating system to fall off to e^(−2π), or 0.2%, of its original value.
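The e^(−2π) figure quoted in the footnote can be checked directly:

```python
import math

# Energy of a freely decaying resonator falls as exp(-omega*t/Q); after
# Q oscillations (t = Q * 2*pi/omega) the remaining fraction is e^(-2*pi).
print(f"{math.exp(-2 * math.pi):.4%}")  # 0.1867%, i.e. about 0.2%
```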

Proton Injector (PI)
A typical ~1 GeV proton linear accelerator consists of three main sections:
• Front end, comprising a proton source and a radiofrequency quadrupole accelerator;
• Medium-velocity linac, which accelerates proton beams to ~100 MeV;
• High-velocity linac, which accelerates protons to energies exceeding 1 GeV.
The most complex part of a proton linac is the low-energy (low-β) section, situated between the proton (or ion) source and the first drift-tube-based accelerating section. The continuous beam of protons coming from an electron cyclotron resonance (ECR) source [8] has to be focused, bunched and accelerated in the first rf structure. These three essential functions are nowadays successfully performed by radio frequency quadrupoles (RFQ) [9]. However, the beam has to be shrunk before it can be fed into an RFQ. This is accomplished within a low-energy beam transport (LEBT) section by means of cylindrical magnets (solenoids).
As soon as the beam is bunched, which is essential for further acceleration, it enters a medium-energy beam transport (MEBT) section, where it is collimated and steered from the RFQ into the medium-velocity linac (MVL). The MEBT may also contain a number of buncher cavities. Inside the MVL, the beam is accelerated to about 100 MeV (β ≈ 0.1 to 0.5). The MVL usually contains normal-conducting drift-tube linac (DTL) and cell-coupled drift-tube linac (CCDTL) structures. A DTL incorporates accelerating components of increasing length in order to match precisely the increase in beam velocity, while quadrupole magnets provide strong focusing. The main advantage of using CCDTL structures is that they provide longitudinal field stability.
High-velocity linac (HVL) structures accelerate the beam to energies around 1 GeV. They consist either of normal-conducting side-coupled linac (SCL) structures² or superconducting elliptical cavities. The latter offer some advantages over the former, such as higher accelerating gradients and lower operating costs.
The superconducting HVL can also feature spoke resonators, characterized by their simplicity, high mechanical stability and compact size [10].
The European Spallation Source (ESS) is an example of a typical ~1 GeV proton linear accelerator [11] (see Figure 4). The transverse beam size along the linac varies in the range 1 -4 mm, and the bunch length decreases from 1.2 cm to 3 mm towards the end of the linac.
One of the main concerns in the design of a high-power proton linac is to restrict beam losses. A careful beam dynamics study is therefore needed in order to avoid halo formation, a major source of beam loss. Another important issue is the preservation of beam emittance [6].
High-power proton linear accelerators have a wide range of applications, including spallation neutron sources, nuclear waste transmutation, production of radioisotopes for medical use, etc. A number of laboratories worldwide have extensive experience in building and operating such machines.
² The main reason for using these π/2-mode structures is that long chains of coupled cavities are often required for an efficient use of high-power rf sources [10].
Figure 4. Block diagram of the ESS linac [11]. The RFQ and DTL structures are normal-conducting, while the spoke resonator and elliptical cavities are superconducting.

Physics at the Proposed Facility
The physics potential of a multi-MW "proton driver" is extensively discussed, for instance, in [14]. This note is mainly concerned with the application of a high-intensity proton source to investigate the properties of long-baseline neutrino oscillations. A unique feature of the proposed facility is the use of a superconducting linac to accelerate both protons and electrons, which considerably increases its physics potential. As described in [15] [16], polarized electron and positron beams can be used to study the structure of composite particles and the dynamics of strong interactions, as well as to search for new physics beyond the Standard Model.

Neutrino Flavor Oscillations and Leptonic CP Violation
The observed transformation of one neutrino flavor into another ("neutrino oscillation") indicates that these particles have finite masses. Cosmological data suggest that the combined mass of all three neutrino species is a million times smaller than that of the next-lightest particle, the electron [17]. The phenomenon of neutrino oscillations implies not only the existence of neutrino mass, but also of neutrino mixing. That is, the neutrinos of definite flavor are not particles of definite mass (mass eigenstates), but coherent quantum-mechanical superpositions of such states. Conversely, each neutrino of definite mass is a superposition of neutrinos of definite flavor. Neutrino mixing is large, in striking contrast to quark mixing. Whatever the origin of the observed neutrino masses and mixings, it implies a profound modification of the Standard Model.
Mathematically, the phenomenon of neutrino mixing can be expressed as a unitary transformation relating the flavor and mass eigenstates. The neutrino oscillation rate depends, in part, on (1) the difference between neutrino masses and (2) the three parameters in the transformation matrix known as mixing angles. The complex phase factors in the transformation matrix (also called mixing matrix) are associated with the violation of CP symmetry in the lepton sector.
The size of the CP violation is determined both by the phases and the mixing angles.
Experiments with high-intensity neutrino beams are designed primarily to explore the mass spectrum of the neutrinos and their properties under the CP symmetry, and thus provide a deeper insight into the nature of these elusive par-ticles and their role in the universe. For instance, if there is experimental evidence for CP violation in neutrino oscillations, it could be used to explain the observed asymmetry between matter and antimatter [18].
To search for CP violation in neutrino oscillations, a 100 kiloton water Cherenkov detector could be built at Okinoshima, located at a distance L_2 ≈ 650 km from KEK. Using the proposed KEK "proton driver", the detector at Okinoshima and the existing 0.022 Mt Super-Kamiokande detector (situated at a distance L_1 ≈ 300 km from KEK), the neutrino mass hierarchy could be determined either by comparing the ν_e appearance probabilities measured at the two vastly different baseline lengths L_1 and L_2, or by measuring at L_1 and L_2 the neutrino energy of the first oscillation maximum. Once the mass hierarchy is determined, the CP-violating phase in the mixing matrix can be measured with a precision of ±20°, assuming that both neutrino and antineutrino beams are available [19].
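The energy of the first oscillation maximum at the two baselines can be sketched with the standard two-flavor approximation, in which the oscillation phase is 1.267 · Δm²[eV²] · L[km] / E[GeV] and is maximal at π/2. The atmospheric mass splitting |Δm²| ≈ 2.5 × 10⁻³ eV² used here is an assumed input, not a value from the text.

```python
import math

DM2 = 2.5e-3  # eV^2, assumed atmospheric mass splitting

def first_max_energy_gev(baseline_km, dm2_ev2=DM2):
    """Neutrino energy at which the oscillation phase first reaches pi/2."""
    return 2.0 * 1.267 * dm2_ev2 * baseline_km / math.pi

e1 = first_max_energy_gev(300)  # Super-Kamiokande baseline, ~0.60 GeV
e2 = first_max_energy_gev(650)  # Okinoshima baseline, ~1.31 GeV

print(round(e1, 2), round(e2, 2))  # 0.6 1.31
```

The two baselines thus place their first oscillation maxima at visibly different neutrino energies, which is what makes the comparison sensitive to the mass hierarchy.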

Proton Target and Magnetic Horn
The main challenge in the design of a multi-MW neutrino beam facility is to build a proton target that could dissipate large amounts of deposited energy, withstand the strong pressure waves created by short beam pulses, and survive long-term effects of radiation damage. Simulation studies of the pion production and energy deposition in different targets (liquid mercury jet, tungsten powder jet, solid tungsten bars and gallium liquid jet) are presented in [20]. Those studies also provide estimates of the amount of concrete shielding needed to protect the environment from the high radiation generated by each target. A proof-of-principle demonstration of a 4 MW target station comprising a liquid mercury jet inside a 20 T solenoidal magnetic field is described in [21]. Alternatively, one could use a rotating, gas-cooled tungsten target that would require the least amount of development effort, and would also have good thermal and mechanical properties [11]. A 15 MW proton beam could be separated by a series of magnets into four beam lines. Each of the four beams would be focused by a series of quadrupoles and correctors to an assembly consisting of four targets and the same number of magnetic horns (see, e.g., [22]).
To maximize the discovery potential of a neutrino beam facility, it is important to properly design the magnetic horn that focuses the charged particles produced in the proton target. For proton beam pulses lasting 1 ms, a DC horn has been designed at KEK by Yukihide Kamiya [23].

Physics with Polarized Electrons and Positrons
Electron and positron beams, polarized and/or unpolarized, can be used to study the structure of composite particles and the dynamics of strong interactions, as well as to search for new physics beyond the Standard Model. A detailed description of the physics potential of a facility that can provide such beams (e.g., the upgraded CEBAF facility at Jefferson Lab or the proposed KEK superconducting linac) is presented in [15] [16].
Polarized positrons are created in a conversion target by circularly polarized photons, which themselves are produced when polarized laser light is Comptonbackscattered on a high-energy electron beam [24]. Circularly polarized photons can also be produced by bremsstrahlung from polarized electrons [25]. Using polarized electrons and positrons, the nucleon electromagnetic form factors and generalized parton distributions can be determined in a model-independent way [16].
Among the physics topics discussed in [15] is parity violation in electron-electron (Møller) scattering. In Møller scattering, the parity-violating asymmetry, A, is dominated by the interference between the electromagnetic and neutral weak amplitudes [26]. By definition,
A = (dσ_R − dσ_L)/(dσ_R + dσ_L),
where dσ_R (dσ_L) is the differential cross-section for right-handed (left-handed) electron scattering on an unpolarized target, and the cross-sections are determined by the scattering amplitudes f_γ and f_Z^(R,L) with γ and Z exchange, respectively. From the four Feynman diagrams in Figure 6 of [27], one can readily obtain the Born amplitudes for Møller scattering mediated by photons and Z bosons. The weak neutral current amplitudes are functions of the weak mixing (or Weinberg) angle θ_w, which relates the weak coupling constants g_w and g_Z to the electromagnetic coupling constant. As shown in [27], the polarization asymmetry for polarized electron scattering on an unpolarized target is proportional to m_e (the mass of the electron), E (the incident beam energy), G_F (the Fermi coupling constant characterizing the strength of the weak interaction), the inverse of the fine structure constant α, a function F(θ) of the scattering angle in the center-of-mass frame, and the weak charge of the electron, which is proportional to 1 − 4 sin²θ_w. Since the value of sin²θ_w is close to 1/4, there is an enhanced sensitivity of A to small changes in the Weinberg angle. The value of θ_w varies as a function of the four-momentum transfer, q, at which it is measured. This variation, or "running", is a key prediction of the Standard Model. The one-loop electroweak radiative corrections to the Born asymmetry A (calculated once the renormalized parameters in (7) are properly defined) reduce its Born value by ~40%. This effect can be attributed to an increase of sin²θ_w(q²) by 3% as the four-momentum transfer "runs" from q² = m_Z² to q² ≈ 0 [28].
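The enhanced sensitivity to the Weinberg angle can be illustrated numerically. The electron's weak charge carries a factor (1 − 4 sin²θ_w); with an assumed low-energy value sin²θ_w ≈ 0.231 (not quoted in the text), this factor nearly cancels, so a small fractional shift in sin²θ_w produces a much larger fractional shift in the asymmetry.

```python
# Sensitivity of the Moller asymmetry's weak-charge factor to sin^2(theta_w).

def weak_charge_factor(sin2_tw):
    """The (1 - 4 sin^2 theta_w) factor entering the electron weak charge."""
    return 1.0 - 4.0 * sin2_tw

s2 = 0.231                          # assumed low-energy value of sin^2(theta_w)
q0 = weak_charge_factor(s2)         # ~0.076, a near-cancellation
q1 = weak_charge_factor(s2 * 1.01)  # after a 1% shift in sin^2(theta_w)

# Ratio of fractional change in the weak-charge factor to the 1% input shift.
amplification = abs(q1 - q0) / abs(q0) / 0.01
print(round(q0, 3), round(amplification, 1))  # 0.076 12.2
```

A 1% change in sin²θ_w thus moves the weak-charge factor by roughly 12%, which is the lever arm exploited by precision Møller experiments.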
The precision with which sin²θ_w can be measured in the MOLLER experiment [29] (in which an 11 GeV longitudinally polarized electron beam scatters on atomic electrons in a liquid hydrogen target) is shown in Figure 5.2 of [15].
Many of the electroweak measurements obtained so far may be combined to provide a global test of consistency with the SM. Since the Higgs-boson mass affects the values of electroweak observables through radiative corrections, it is of fundamental importance to test the agreement between the directly measured value of m_H and that inferred from the measurements of the electroweak parameters m_W, m_top and sin²θ_w (see Figure 5.2 in [15]). High-precision electroweak measurements, therefore, represent a natural complement to direct studies of the Higgs sector.
Apart from providing a comprehensive test of the SM, precision measurements of weak neutral current interactions at q² ≪ m_Z² also allow indirect access to new physics phenomena beyond the TeV energy scale. For instance, such measurements can be used to look for hypothetical Z' bosons, 4-fermion contact interactions, or very weakly coupled low-mass "dark bosons" [30].

Rare Kaon Decays
CP violation was introduced in the SM by increasing the number of quark and lepton families to at least three (M. Kobayashi and T. Maskawa, 1973). This idea became very attractive with the subsequent discovery (in 1977) of the bottom quark, which forms, together with the top quark (discovered in 1995), a third family of quarks. It is a remarkable property of the Kobayashi-Maskawa model that quark mixing and CP violation are intimately related [31].
Figure 5. The unitarity triangle as determined from K → πνν decays; the displacement of the bottom-right vertex is due to the charm-quark contribution to K⁺ → π⁺νν.
A deeper insight into CP violation is expected to be gained from precision measurements of rare kaon decays such as K_L⁰ → π⁰νν and K⁺ → π⁺νν.
Both decays are theoretically 'clean' because hadronic transition amplitudes are matrix elements of quark currents between mesonic states, which can be extracted from the leading semileptonic decays using isospin symmetry. Since photons do not couple to neutrinos, K → πνν decays are entirely due to second-order weak processes determined by Z-penguin and W-box diagrams [31].
The process K_L⁰ → π⁰νν proceeds almost entirely through direct CP violation, and is completely determined by "short-distance" one-loop diagrams with top-quark exchange. The Standard Model prediction for its branching ratio is given in [32]. This decay is an important source of information on higher-order effects in electroweak interactions, and thus can serve as a probe of new physics (see [33] and references therein). By measuring the branching ratios of both K → πνν decay modes, the unitarity triangle of the CKM matrix can be completely determined (see Figure 5), provided the matrix element V_cb and the top-quark mass are known [34]. Of particular interest is the unitarity triangle parameter sin 2β, which can also be determined from K → πνν decays. The current measurement of the K⁺ → π⁺νν branching ratio is based on the seven candidate events observed by the experiment E787/E949 at Brookhaven [35]. This result is consistent with (7.8 ± 0.80) × 10⁻¹¹, the value predicted by the SM [32].
As shown in Figure 11 of [33], the kaon yield rises rapidly as a function of the incident proton momentum. From the figure one infers that the minimum energy of the proton beam should be about 20 GeV, for otherwise the kaon yield would be severely reduced. At the proposed KEK facility, the total proton beam energy would be (2.5 + 20) GeV. A 70 GeV proton synchrotron could be installed, at a later stage, inside the Tristan ring in order to increase the proton beam energy, albeit at the cost of a considerably lower beam power. For a given kaon yield, the required beam power would be lowest at E_b = 30 - 100 GeV [33].

A Novel g-2 Experiment with Ultra-Slow Muons
A charged elementary fermion has a magnetic dipole moment µ = g_s (q/2m) s aligned with its spin s. The proportionality constant g_s is the Landé g-factor, q is the charge of the particle and m is its mass. Dirac's theory of the electron predicts that g_s = 2. For the electron (e), muon (µ) and tau lepton (τ), this prediction differs from the observed value by a small fraction of a percent. The difference is the anomalous magnetic moment; the anomaly is defined by a ≡ (g_s − 2)/2 [36].
The current experimental uncertainty on a_µ is ±0.54 ppm. In a novel g-2 experiment [37], the main aim of which is to reduce this uncertainty to ±0.1 ppm, 3 GeV protons impinge on a graphite target and produce pions that are stopped in the target. Some of the positive pions are brought to rest near the surface of the target, where they decay into positive muons with momenta p_µ = 30 MeV/c and 100% spin polarization. The muons are collected using a large-aperture solenoid and transported to a silica-aerogel target in which they form muonium (electron-µ⁺) atoms. As the atoms slowly diffuse from the target, they are ionized by a pulsed laser to produce 50% polarized muons with very low momenta. Those "ultra-slow" muons are then reaccelerated and injected into a storage magnet. The highest-energy positrons, preferentially emitted parallel to the muon spin direction in the µ⁺ rest frame, are Lorentz-boosted to become the highest-energy positrons in the lab frame. Hence, the angular distribution of those positrons has its maximum in the direction of the muon spin [17]. By measuring the energy and time distributions of positrons one can determine the average spin direction. The time spectrum will show the muon lifetime modulated by the spin precession frequency. The relative precession of the spin with respect to the direction of the particle velocity u has two components, ω_a and ω_η, which arise from a_µ and d_µ, respectively (β ≡ u/c).
Since the rotation axes due to a_µ and d_µ are orthogonal, the corresponding signals can be separated [37]. For the muon, the anomalous precession period is 2.2 µs, about 300 times the cyclotron period. Assuming that muons are 100% polarized, 1.5 × 10¹² positrons have to be detected for a measurement precision of 0.1 ppm [37].
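The quoted ~2.2 µs anomalous-precession period and its ~300:1 ratio to the cyclotron period can be reproduced from textbook formulas. The storage field B = 3 T and muon momentum p = 300 MeV/c are assumed values for such an experiment and are not taken from the text; a_µ ≈ 1.166 × 10⁻³ is the measured anomaly.

```python
import math

E_CHARGE = 1.602e-19   # C
M_MU = 1.883e-28       # muon mass, kg
M_MU_MEV = 105.66      # muon mass, MeV/c^2
A_MU = 1.166e-3        # muon anomalous magnetic moment

B = 3.0                # T, assumed storage field
p_mev = 300.0          # MeV/c, assumed stored-muon momentum

gamma = math.hypot(p_mev, M_MU_MEV) / M_MU_MEV  # Lorentz factor, ~3.0

# Anomalous precession: omega_a = a_mu * e * B / m (independent of gamma).
omega_a = A_MU * E_CHARGE * B / M_MU
t_anom = 2 * math.pi / omega_a                  # ~2.1 us

# Cyclotron motion: omega_c = e * B / (gamma * m).
omega_c = E_CHARGE * B / (gamma * M_MU)
t_cyc = 2 * math.pi / omega_c

print(round(t_anom * 1e6, 1), round(t_anom / t_cyc))  # 2.1 285
```

With these assumed parameters the anomalous period comes out near 2.1 µs and roughly 285 cyclotron periods, consistent with the figures quoted above.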

An XFEL Based on the Proposed Superconducting Linac
To record the dynamics of atoms requires a probe with Ångstrom (10 −10 m) wavelength and femtosecond temporal duration (10 −15 s). Such probes have recently become available with the advent of X-ray free-electron lasers (XFELs). The ultrashort pulse duration of an XFEL matches the timescale of non-equilibrium microscopic processes such as electron transfer in molecules, evolution of chemical reactions, vibration dynamics in solid state systems, etc. Optical lasers are also capable of producing pulses of femtosecond duration, but lack the required spatial resolution.
The spectral brightness (or brilliance), B, of a radiation field can be expressed, in practical units, as the number of photons per second passing through a given cross-sectional area and within a given solid angle and spectral bandwidth (BW).
This quantity determines how much monochromatic radiation can be focused onto a tiny spot on the target. The peak spectral brightness (the brightness measured during the very short duration of an FEL pulse) of the two presently most powerful XFEL facilities (LCLS at SLAC in the United States and SACLA at SPring-8 in Japan) is a billion times higher than that of any synchrotron radiation source.
Owing to the high intensity of XFEL radiation, laser-irradiated atoms, molecules and atomic clusters can be excited into previously unknown states. Although high-intensity pulses may also destroy molecular structures, they can still be used to produce high-resolution X-ray diffraction patterns, from which real-space images of the atomic positions in molecules can be reconstructed. In a typical "pump-probe" experiment, the evolution of a chemical (or biochemical) reaction, initiated by an optical or IR laser pulse, is observed by a time-delayed X-ray pulse. By varying the delay, such stroboscopic measurements result in femtosecond "movies" of the evolving system. Note that, due to their long pulse duration, X-rays from synchrotron light sources can be used to image atomic structures only in static measurements.
Nanometre-scale molecular imaging is made possible by the high degree of coherence of the XFEL radiation. The coherence quality of a light source is best described by the degeneracy parameter D, defined as the number of emitted photons per coherent phase-space volume per coherence time. The degeneracy parameter is proportional to Bλ³, where B is the brilliance and λ the wavelength of the source.
Free-electron lasing is achieved by a single-pass, high-gain FEL amplifier operating in the so-called self-amplified spontaneous emission (SASE) mode. An FEL consists of an electron linear accelerator and an undulator, a long periodic array of magnets with period λ_u. The wavelength of the first harmonic of the observed FEL radiation is given by λ_1 = (λ_u/2γ²)(1 + K²/2), where γ is the Lorentz factor of the electron beam and K is the dimensionless undulator strength parameter [38] [39]. The wave train is not monochromatic due to its finite length. For typical parameters, the length of a wave train is ~0.1 µm, corresponding to a duration of 0.33 fs; an 80 fs pulse, therefore, consists of many micropulses of 0.33 fs duration.
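The first-harmonic wavelength formula can be evaluated for European-XFEL-like parameters. The 17.5 GeV beam energy is the figure quoted later in the text; the undulator period λ_u = 40 mm and strength K = 3.3 are assumed, SASE1-like values, not taken from the text.

```python
# First harmonic of undulator radiation: lambda_1 = (lambda_u / 2 gamma^2)(1 + K^2/2).

M_E_MEV = 0.511  # electron mass, MeV/c^2

def fel_wavelength(e_beam_gev, lambda_u_m, K):
    """First-harmonic FEL wavelength in metres."""
    gamma = e_beam_gev * 1e3 / M_E_MEV
    return lambda_u_m / (2 * gamma**2) * (1 + K**2 / 2)

lam = fel_wavelength(17.5, 0.040, 3.3)  # assumed lambda_u and K
print(f"{lam*1e9:.2f} nm")  # 0.11 nm, i.e. hard X-rays
```

With these assumptions the output lands at ~0.1 nm, the Ångstrom scale mentioned at the start of this section.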
The 'shot noise' in an electron beam, which originates in the random emission of electrons from a photocathode, causes random fluctuations of the beam density. The radiation produced by such a beam has amplitudes and phases that are random in both space and time. For this reason, SASE X-ray FELs lack longitudinal (or temporal) coherence, which is characterized by the coherence length. In order to increase the coherence length in the hard X-ray regime (photons with 0.1 nm wavelength), a "self-seeding" method was tested at LCLS [40]. FEL pulses, generated in the first modular section of the LCLS undulator, are spectrally "purified" by a crystal filter (a diamond monochromator). Since a typical monochromator delays X-rays, the electron bunches exiting the first modular section are appropriately delayed after being diverted around the crystal by a compact magnetic chicane (see Figure 1 in [40]). The crystal selects a very narrow part of the spectrum, which is further amplified in the second undulator section, where the FEL radiation reaches saturation. At LCLS, "self-seeding" generated X-ray pulses with Δν = 0.4 - 0.5 eV at hν = 8 - 9 keV, which represents a factor of 40 - 50 bandwidth reduction with respect to SASE [40].

The European XFEL as a Prototype for the Proposed X-Ray FEL
The European XFEL, currently under construction at DESY (Germany), could serve as a prototype for the proposed X-Ray FEL at KEK. The XFEL at DESY is a free-electron laser based on self-amplified spontaneous emission (SASE) in the X-ray regime. The FEL consists of a 17.5 GeV superconducting electron linear accelerator and a set of undulators that can generate both SASE FEL X-rays and incoherent radiation. A schematic layout of the European XFEL is shown in [41]. Electron bunches, each with a charge of 1 nC, are extracted from a photocathode by short ultraviolet laser pulses and then focused and accelerated inside a radio-frequency cavity ("RF gun") to an energy of 120 MeV. In order to produce 5 kA peak currents necessary for lasing, the bunches are further accelerated and longitudinally compressed down to 25 µm using two magnetic chicanes (at 0.5 and 2.0 GeV). After traversing the main linac, where their energy is increased to 17.5 GeV, the bunches are sent through a number of FEL undulators.
A superconducting linac may accelerate 10 "bunch trains" per second, each train consisting of up to 2700 electron bunches. This results in 27,000 ultrashort X-ray flashes per second. The higher the number of electron bunches, the more scientific instruments can be operated simultaneously. The European XFEL facility will generate ultra-short pulses (≤ 100 fs) of spatially and temporally coherent X-rays with wavelengths in the range ~0.1 - 5 nm and extremely high peak brilliance. The coherent superposition of the radiation fields from all microbunches within an electron bunch in the linac is responsible for the nearly monochromatic spectrum and small divergence of the radiation emitted in the forward direction [38]. Recall also that "self-seeding" can substantially improve the longitudinal (temporal) coherence of SASE XFEL radiation. Thus, the radiation from an X-ray FEL has a narrow bandwidth, is transversely and longitudinally coherent, and is fully polarized. The coherently emitted XFEL spectral lines appear in addition to the spontaneously emitted undulator spectrum, which extends into the MeV energy region (see, e.g., [42]).
The micropulses that form an FEL pulse give rise to "spikes" in the spectrum.
The amplitudes of the micropulses vary greatly as a consequence of the amplified stochastic variations in the electron density. Within a micropulse, the radiation is both transversely and longitudinally coherent. The duration of a micropulse is roughly t_coh, the coherence time. In the SASE 1 and SASE 2 undulators at the European XFEL, t_coh = 0.2 - 0.38 fs [41]. The number of "spikes" in a pulse is given by the ratio of the bunch length to the coherence length. The spectrum of undulator radiation is sharply peaked around odd harmonics⁴ (see, e.g., [42] [44]). Therefore, X-ray FELs can be used not only to probe the structure of matter down to the size of an atom, but also to take 'snapshots' of the motion of atoms and molecules [45].
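With the coherence times quoted above, the spike-count estimate is straightforward; the 80 fs pulse duration used here is an assumed value of the order discussed earlier in this section.

```python
# Spike-count estimate: number of spikes ~ pulse duration / coherence time.

pulse_fs = 80.0           # assumed FEL pulse duration, fs
coh_fs = (0.2, 0.38)      # SASE 1 / SASE 2 coherence times from [41], fs

spikes = [int(pulse_fs / t) for t in coh_fs]
print(spikes)  # [400, 210]
```

A single pulse therefore contains a few hundred statistically independent spikes, which is why the SASE spectrum looks noisy shot to shot.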

Summary
The main "bottleneck" limiting the beam power in circular machines is caused by space charge effects that produce beam instabilities. Such a "bottleneck" exists at the J-PARC proton synchrotron complex, and is also intrinsic to the "proton drivers" envisaged at CERN and Fermilab. In order to maximally increase the beam power of a "proton driver", it is proposed to build a facility consisting of a low-energy injector linac (PI) and a high-energy pulsed superconducting L-band linac (SCL). The 2.5 GeV PI, based on the European Spallation Source (ESS) linac, would serve both as an injector to the SCL and a source of proton beams that could be used to copiously produce muons and "cold" neutrons. Protons 4 The occurrence of higher harmonics is explained in [38]. In the forward region (θ = 0) of a planar undulator, only the odd higher harmonics are observed, while the off-axis radiation contains also the even harmonics. For the European XFEL, simulations predict that the relative contribution to the total radiation power of the 3 rd and the 5 th harmonic is about 1% and 0.03%, respectively [41]. An SCL-based X-ray free-electron laser (XFEL) and a synchrotron light source for applications in materials science and medicine are also envisaged. The ultrashort pulse duration of an XFEL matches the timescale of non-equilibrium microscopic processes, allowing the dynamics of atoms and molecules to be recorded in the form of femtosecond "movies".

Introduction
Cavitation is ubiquitous in liquids and occurs when the local pressure falls below the saturated vapor pressure. As a cavitation bubble collapses near a solid wall, the associated phenomena include instantaneous high pressure, high-velocity jets and high temperature, which are closely related to cavitation erosion of the solid material surface. On the other hand, the collapse of cavitation bubbles has been applied in environmental protection, ultrasonic therapy, lab-on-a-chip devices and material surface cleaning [1] [2].
The mechanism of cavitation bubble collapse near a solid wall is a fundamental issue for the above applications. However, since so many phenomena are involved, a theoretical model of cavitation bubble collapse is difficult to establish, and for complex boundary geometries an analytical solution is generally impossible. Many publications have been devoted to the investigation of bubble collapse near a planar wall, starting from the experimental work [3]. In recent years, many works have investigated the interaction between a collapsing bubble and a non-planar solid wall [1] [4], and numerical methods have become increasingly important tools for studying bubble collapse near a solid wall [5] [6]. Commonly used numerical methods include the finite volume method (FVM) [4], the finite element method (FEM) [7], and the boundary element method (BEM) [8]. These macroscopic numerical methods, based on solving partial differential equations, are limited in handling multiphase flows and complex geometric boundary conditions. To describe multiphase flow, macroscopic methods require interface-tracking or interface-capturing schemes, which reduce computational efficiency. Complex geometric boundaries are likewise difficult and inefficient to implement with macroscopic methods.
Owing to its flexibility for complex geometric boundaries and the simplicity of its algorithm, the lattice Boltzmann method (LBM) has developed into a powerful tool for simulating flow, heat and mass transfer involving complex geometric boundaries [9] [10]. In the LBM community, many multiphase models have been presented, which can be generally classified as the color-gradient model [11], the pseudopotential model (or Shan-Chen model) [12] [13], the free-energy model [14] and the phase-field model. The pseudopotential model is widely and successfully used in the LBM multiphase community due to its conceptual simplicity and computational efficiency [15] [16]. In the pseudopotential model, fluid interactions are mimicked by an interparticle potential, from which a non-monotonic equation of state (EOS) can be obtained. As a result, the separation of fluid phases or components is achieved automatically, and no methods to track or capture interfaces are required.
In recent years, the pseudopotential model, as the leading choice among multiphase LB models, was introduced into the study of cavitation by Sukop and Or [17].
Subsequently, several research efforts emerged to investigate the mechanism of cavitation. Chen [18] simulated cavitation bubble growth using a modified pseudopotential LB model with the exact difference method (EDM) forcing scheme; the results in quiescent flows agree fairly well with the solution of the Rayleigh-Plesset equation. Mishra [19] introduced a cavitation model based on the pseudopotential LB model that allows for coupling between the hydrodynamics of a collapsing cavity and supported solute chemical species. However, these studies do not involve the interaction between the bubble and a solid wall. The pseudopotential model was later applied to the mechanism of cavitation bubble collapse near a solid wall by Shan et al. [20] [21], but these previous works involve only planar walls. In engineering practice, rough walls are ubiquitous, and the mechanism of cavitation bubble collapse near a rough wall is a basic issue for the efficient application of collapsing cavitation bubbles.
One of the key challenges in simulating cavitation bubble collapse near a rough wall is the stability of the LB model. For the pseudopotential multiphase model, stability is closely related to thermodynamic consistency [22], and great efforts have been devoted to this issue [23] [24] [25]. Recently, Li et al. [24] [25] [26] found that there exists a suitable forcing scheme which can meet the thermodynamic consistency requirement in an efficient way. In order to investigate the interaction mechanism between a collapsing bubble and a complex geometric boundary, the improved forcing scheme for the pseudopotential MRT LB model developed by Li et al. [25] is adopted in the present work to achieve a sufficient density ratio and numerical stability.

Numerical Method
The pseudopotential LB model, also well known as the Shan-Chen model, was developed by Shan and Chen in 1993 [12]. In the pseudopotential model, fluid interactions are mimicked by an interparticle potential, which is now widely called the pseudopotential. The original pseudopotential LB model employed a single-relaxation-time (SRT) collision operator. In recent years, the MRT collision operator has been shown to be superior to the SRT operator in terms of numerical stability. The MRT LB evolution equation can be written as

f_\alpha(x + e_\alpha \delta_t, t + \delta_t) = f_\alpha(x, t) - (M^{-1} \Lambda M)_{\alpha\beta} \left( f_\beta - f_\beta^{eq} \right) + \delta_t F'_\alpha,   (1)

where f_\alpha is the density distribution function, f_\alpha^{eq} is its equilibrium distribution, t is the time, x is the spatial position, e_\alpha is the discrete velocity along the \alpha-th direction, \delta_t is the time step, F'_\alpha is the forcing term in velocity space, and M is an orthogonal transformation matrix. \Lambda in Equation (1) is a diagonal relaxation matrix; for the D2Q9 lattice,

\Lambda = \mathrm{diag}(\tau_\rho^{-1}, \tau_e^{-1}, \tau_\zeta^{-1}, \tau_j^{-1}, \tau_q^{-1}, \tau_j^{-1}, \tau_q^{-1}, \tau_\nu^{-1}, \tau_\nu^{-1}).

Through the transformation m = M f, Equation (1) can be rewritten in moment space as [25]

m^* = m - \Lambda (m - m^{eq}) + \delta_t \left( I - \frac{\Lambda}{2} \right) S,

where I is the unit tensor and S is the forcing term in moment space; F = (F_x, F_y) denotes the force acting on the fluid system in two dimensions. The streaming step of the MRT-LB equation can then be formulated as

f_\alpha(x + e_\alpha \delta_t, t + \delta_t) = f_\alpha^*(x, t), \quad f^* = M^{-1} m^*.

For the pseudopotential LB model on the D2Q9 lattice, the force F is given by

F = -G \psi(x) \sum_\alpha w(|e_\alpha|^2)\, \psi(x + e_\alpha)\, e_\alpha,   (7)

where \psi(x) is the interparticle potential and G is the interaction strength, with

\psi(x) = \sqrt{ \frac{2 (p_{EOS} - \rho c_s^2)}{G c^2} },   (8)

where p_{EOS} is the pressure calculated by the equation of state (EOS). The G in Equation (8) loses its meaning as an interaction strength and is used only to ensure that the whole term inside the square root is positive [27]. The Carnahan-Starling (CS) EOS is adopted in the present work, which can be given by [27]

p_{EOS} = \rho R T \frac{1 + \varphi + \varphi^2 - \varphi^3}{(1 - \varphi)^3} - a \rho^2, \quad \varphi = \frac{b \rho}{4},

where a and b are the EOS parameters, which can be related to the critical temperature T_c and critical pressure p_c, respectively.
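The pseudopotential construction described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the parameter values (a = 0.5, b = 4, R = 1, G = -1, T = 0.6 T_c) and the interaction-weight convention w(1) = 1/3, w(2) = 1/12 are common lattice-unit choices assumed here for demonstration.

```python
import numpy as np

# Illustrative sketch of the Shan-Chen-type interaction force on a D2Q9
# lattice. Parameter values and weight convention are assumptions.
a, b, R, G = 0.5, 4.0, 1.0, -1.0
cs2 = 1.0 / 3.0                     # D2Q9 lattice sound speed squared (c = 1)
Tc = 0.3773 * a / (b * R)           # CS EOS critical temperature
T = 0.6 * Tc

def p_cs(rho):
    """Carnahan-Starling EOS: rho*R*T*(1+phi+phi^2-phi^3)/(1-phi)^3 - a*rho^2."""
    phi = b * rho / 4.0
    return rho * R * T * (1.0 + phi + phi**2 - phi**3) / (1.0 - phi)**3 - a * rho**2

def psi(rho):
    """Pseudopotential recovering the EOS: psi = sqrt(2*(p - rho*cs2)/(G*c^2))."""
    return np.sqrt(2.0 * (p_cs(rho) - rho * cs2) / G)

# D2Q9 non-rest velocities and interaction weights w(1)=1/3, w(2)=1/12
e = np.array([(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([1/3] * 4 + [1/12] * 4)

def shan_chen_force(rho):
    """F(x) = -G * psi(x) * sum_a w_a * psi(x + e_a) * e_a on a periodic grid."""
    p = psi(rho)
    F = np.zeros(rho.shape + (2,))
    for (ex, ey), wa in zip(e, w):
        shifted = np.roll(np.roll(p, -ex, axis=0), -ey, axis=1)  # psi at x + e_a
        F[..., 0] += wa * shifted * ex
        F[..., 1] += wa * shifted * ey
    return -G * p[..., None] * F

rho = np.full((8, 8), 0.2)          # uniform density -> zero net force
F = shan_chen_force(rho)
print(np.abs(F).max())              # prints 0.0
```

A uniform density field yields zero force by symmetry; at a liquid-vapor interface the gradient of ψ produces the attractive interaction that separates the phases.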
The force F in Equation (7) can be incorporated into the evolution equation via S with a specific forcing scheme. Li et al. [25] proposed an MRT-version forcing scheme to achieve thermodynamic consistency; for the D2Q9 lattice, it modifies the moment-space forcing term S through a parameter ε that is related to the particle interaction range [28] and was shown by Li et al. to adjust the thermodynamic consistency [25]. Li's forcing scheme has the following advantages: a) it maintains a uniform layout with the general form of LB forcing schemes; b) it achieves thermodynamic consistency by tuning only one constant parameter; and c) it fully retains the LBM's advantages of simplicity and efficiency. In [21] it was found that, for Li's improved forcing scheme, the thermodynamic consistency is independent of the kinematic viscosity, and the surface tension is independent of the relaxation time τ_ν. These features make it more convenient to investigate the physical mechanisms of multiphase flows.
Unless otherwise specified, the units adopted in this paper are the lattice units of the LBM, i.e., the units of length, time, mass, velocity, density and pressure are lu (lattice unit), ts (time step), mu (mass unit), lu ts^-1, mu lu^-3 and mu lu^-1 ts^-2, respectively.

Cavitation Bubble Collapse near Rough Solid Wall
It has been confirmed that geometry plays a crucial role in the process of cavitation bubble collapse [29]. The interaction between a collapsing bubble and a rough solid wall is a representative problem abstracted from cavitation applications involving complex boundary geometries. In the present work, the rough solid wall is described by a wall with periodic grooves of equal width, as shown in Figure 1. In this diagram, D_g and W_g are the depth and the width of a groove, respectively, W_b is the width of a bulge, and P_gb is the period of the periodically arranged geometry.
The computational domain is shown in Figure 2. R_0, P_v and P_∞ are the initial bubble radius, the pressure inside the bubble and the ambient pressure, respectively. d_g and d_b are the distances from the bubble center to the groove bottom and to the bulge top, respectively. Based on the concept of the stand-off parameter [30], two non-dimensional parameters are introduced to describe the positional relation between the bubble and the rough solid wall. Pressure boundary conditions, implemented by the non-equilibrium extrapolation scheme [31], are adopted on the left and right boundaries, and a constant-pressure boundary condition is imposed at the top of the computational domain via the Zou-He scheme [32].
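The original expressions for the two parameters are not reproduced in the extracted text; in the usual stand-off convention (the symbols λ_g and λ_b below are assumed labels for illustration), they would read

```latex
\lambda_g = \frac{d_g}{R_0}, \qquad \lambda_b = \frac{d_b}{R_0},
```

so that each parameter measures the corresponding wall distance in units of the initial bubble radius.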
A 401 × 401 lattice is adopted in the simulations of this section. A vapor bubble with initial radius R_0 = 80 is placed at the center of the domain. The density field is initialized with a smooth profile across the liquid-vapor interface [33]. For the CS EOS, a = 0.5. In order to simulate the bubble collapse process, a positive pressure difference ΔP = P_∞ − P_v is imposed by artificially tuning the initial liquid density away from the equilibrium state. In this section, the pressure difference is ΔP = 0.0116.
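A commonly used initialization of this kind (a sketch of the standard hyperbolic-tangent profile; the interface width W and the exact form used in the paper are assumptions here) is

```latex
\rho(x, y) = \frac{\rho_l + \rho_v}{2}
           - \frac{\rho_l - \rho_v}{2}
             \tanh\!\left[ \frac{2 \left( \sqrt{(x - x_0)^2 + (y - y_0)^2} - R_0 \right)}{W} \right],
```

where ρ_l and ρ_v are the coexisting liquid and vapor densities, (x_0, y_0) is the bubble center, and W is the initial interface width.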

Evolution of Density Field
The evolution of the density field is shown in Figure 3. The deformation of the bubble profile can be observed from the density field. The initially circular bubble undergoes a dynamic process similar to that of collapse near a planar wall [20]. The differences are mainly reflected in two aspects. First, the rebound of the shock wave in the rough-wall case is weaker than in the planar-wall case; the strength and the reflection path of the shock wave are affected by the geometry of the rough solid wall. Second, the geometry of the rough solid wall affects the bubble deformation, especially at the last stage. To examine this more intuitively, the pressure field evolution of the collapsing bubble is investigated in the next section.

Evolution of Pressure and Velocity Field
Several representative moments of the pressure and velocity field evolution are shown in Figure 4.

Analysis of Eroding Effect
In order to investigate the eroding process and effect induced by bubble collapse close to the rough solid wall, four test sections are set at the bulge tops and the groove side walls, as shown in Figure 5. The corresponding pressure and velocity evolutions are shown in Figure 6 and Figure 7.
From Figure 6, we can see that test section a experiences three pressure peaks during the observation interval. The closer to the groove entrance, the greater the pressure amplitude. The first pressure peak, with the greatest amplitude, is the result of the shock wave. The second is the water-hammer effect of the jet; however, as the test section is parallel to the jet direction, its amplitude is weaker than that of the first peak. The third pressure peak is induced by the second collapse. The pressure evolution at test section b displays a similar pattern; due to the impeding effect of the bulge, a lower-pressure area is induced near test section b. Test section c is just under the collapse position, so its three pressure peaks are greater than those at test sections a and b. The pressure pattern of test section d is similar to that of b, on account of its larger deviation from the collapse position. In summary, the eroding effect at the bulge part of the solid wall is weaker than that at the side walls, and is most obvious at the side walls of the grooves. The pressure and velocity evolution patterns demonstrate that the fluid velocity accounts for the eroding effect at the side walls of the grooves, while the pressure accounts for the eroding effect at the bulge part of the rough solid wall.

Conclusions
For the modeling of a collapsing bubble near a rough solid wall, an improved MRT pseudopotential LB model with the modified forcing scheme of Li et al. was adopted, and the bubble collapse near the rough solid wall was simulated. The bubble collapse mechanism was investigated through the dynamic process, including the evolution of the bubble profiles and of the pressure and velocity fields. The eroding effects of the shock wave and the jet were analyzed in detail. It is found that the process and the effect of bubble collapse are seriously affected by the geometry of the solid boundary. In the present work, the interaction between the collapsing bubble and a boundary structured with periodic grooves was investigated.
We found that the fluid velocity is the major cause of the eroding effect at the side walls of the grooves, while the pressure is the major cause of the eroding effect at the bulge part of the rough solid wall.

All four language skills are either receptive or productive skills.
Reading is an active and diligent process. To establish successful and productive communication between a reader and a writer, the prerequisites are clarity of thought and correct usage of words in the text. The relevance of writing is unattainable unless the reader understands and comprehends the text in its actual context. Reading involves understanding words and their meanings in order to arrive at an accurate interpretation of the text. When a reader reads a text, he or she should be able to connect the words with the given situation for better understanding.
Readability is defined as the ease with which a written text can be understood by a reader. The readability of a particular text depends on both the complexity of its vocabulary and that of its syntax.
To measure readability, various computer-based formulas have been proposed that include only two factors [1]-[13]: 1) the number of syllables (or letters) in a word and 2) the number of words in a sentence. The results generated by these various formulas are not always accurate, nor always in good agreement with one another.
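As an illustration of such two-factor formulas, here is a minimal sketch of the Flesch Reading Ease score, one classic member of this family; the crude vowel-group syllable counter is an assumption for demonstration only, and real readability software uses more careful syllabification.

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count vowel groups (a rough heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier readability."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) \
                   - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))
```

Note that both factors reward short sentences and short words, which is exactly why formulas of this type can disagree on texts whose difficulty stems from content rather than surface form.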
In the present paper, we examine the reading speed V_RS and reading comprehension for text shown in black on a plain background, for non-native English-speaking undergraduate students of our institution. Different presentations of the text, with different font sizes, were assigned to the students. The obtained results are compared with those of similar investigations. Correlations between performance measures, such as success rate and time to complete tasks, and the individual students are also presented.

Methodology, Experimental Result & Discussion
In order to check readability, a planned survey was conducted in two phases.
In the first phase, a drafted text with a few direct questions was chosen to evaluate the students' reading speed. The difficulty level of the text was checked thoroughly using readability software [1]-[11] before it was given to the students. The students were asked to read the text carefully for a few minutes and to write the answers in the space provided after each paragraph. The time they took to read the text was noted by the instructor. The purpose of this investigation was simply to see whether the students had understood the text, by considering the time it took them to complete the tasks. In the second phase, after monitoring the reading speed, a reading comprehension test was given to the students: an elaborated and structured text containing a variety of multiple-choice questions. The purpose of this survey was to check the students' understanding. The total time taken by the students to complete the test was also noted by the instructor.
The sample subjects of this study are 70 students, both male and female (60% male and 40% female). In terms of language proficiency, we used a placement test with average levels of 9 and 14, calculated using the readability software indicated in Table 1 [1]-[11]. The selected subjects were both employed and non-employed students. The readability levels are given in Table 1 and can be calculated using the Fry graph software (Figure 1). Scores calculated by different readability software packages differ. The scores usually range between 0 and 100; a higher score indicates easier readability.
A readability score in the range of 90 - 100 indicates text easily understood by an average 11-year-old student, while a score in the range of 60 - 70 can be easily understood by a 13- to 15-year-old student. A score in the range of 0 - 30 is best understood by university students. Based on the USA education system, a grade level is equivalent to the number of years of education a person has had.
Grade-level scores over 22 should generally be taken to indicate graduate-level text.
The reading speed text contained 516 words. The subjects were allowed to take the test after they confirmed that they had understood how to answer the questions. They were also told to read reasonably quickly because of the time constraint.
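Reading speed in words per minute follows directly from the word count and the recorded time. In the sketch below, the 516-word figure is from the text, while the four-minute reading time is a hypothetical value for illustration.

```python
def reading_speed_wpm(word_count, minutes):
    """Reading speed V_RS in words per minute."""
    return word_count / minutes

# 516-word passage (from the text); a hypothetical student taking 4 minutes:
print(reading_speed_wpm(516, 4.0))   # prints 129.0
```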
Presenting text in darker colors and at larger sizes, with more pixels, helped students perform better. Black text on a plain background has been found to yield faster reading than black text on a medium-textured background [14], and up to 32 percent faster reading than light text on a dark background [14]. It seems that larger text with more pixels draws more attention than smaller text.
In this investigation, for rapid reading and understanding we present black text on white page with high contrast backgrounds. The font size is either 12 or 14 points.
Long sentences with unnecessary words are often used to express more than one idea in a sentence. Research indicates that brief, simple sentences are more easily and readily understood than long ones; sentences over 20 words in length cause a loss in reading comprehension [15]. In addition to adequate contrast between text and its background, it is also recommended that the number of sentences in a paragraph should not exceed six, as indicated in Table 1 of this investigation.
The comprehension reading text contained 295 words. The test was designed to be consistent with the information items to be tested, at an intermediate difficulty level (Table 1), which can be calculated using the Raygor graph software. Values of R range from −1 to +1. A correlation coefficient of 0 means that there is no relationship; a value of −1 is a perfect negative correlation and a value of +1 indicates a perfect positive correlation. Another value of use in correlation analysis is the coefficient of determination, represented as R², which varies between 0 and 1.
In Figure 3 and Figure 4, after collecting the data, the samples of V_RS and students' scores are sorted in ascending order. Then the sorted data of V_RS and score (y-axis) versus the corresponding students (x-axis) are displayed as a scatter plot. The x-axis coordinate x_1 corresponds to the first student and x_n corresponds to the n-th student.
The variation in V_RS using font size 12, together with the associated student scores, versus student names is illustrated in Figure 3 and Figure 4. For reading comprehension under the different font sizes 12 and 14, the relationship between the students and their scores is illustrated in Figure 7 and Figure 8 for groups 1 and 2, respectively. This correlation can be described by the linear relationships indicated by Equations (4.1) and (4.2) for groups 1 and 2, respectively. Note that the x-axis shows the name of the student and not the number of the student, and the graph was obtained after sorting the data from the lowest to the highest score. In this part of the investigation, the subjects were told that there was no time limitation and that they could work at their own pace, but that they were not supposed to take a lot of time. The value of the coefficient of determination under this condition is larger than 0.84, which indicates a very strong correlation.
For the same group, the scores obtained under fonts 12 and 14 can also be described by a linear relationship. The survey indicated that more than 90% of the students do not read regularly. In comparison with native speakers, it seems that non-native English-speaking students do not spend more time reading. This could be explained by many factors, such as difficulty in reading, lack of motivation, and environment.
The coefficient of determination gives an indication of the contribution of the factor being studied in the regression analysis to the relationship between reading speed, score and student, as in the case of the reading-speed data (Figure 3 and Figure 4). Reading skill, once developed, can most easily be maintained at a high level by the students. If the students have poor receptive skills, they will find reading monotonous; as a result, they will never interpret the text in the right context.
They will face multiple problems in comprehending and understanding English, especially in their academic courses, where the English language is most commonly used. Because of students' lack of proficiency in the language, bridging the language gap requires considerable collaborative effort and coordination between student and teacher to overcome the difficulties in the learning process. Higher complexity requires more learning and results in less efficient human performance.
In addition to the effect of the environment, the differences in reading speed and performance may be explained by individual differences and basic skills. We all differ in learning abilities and in task completion [16].
Knowledge, experience, environment, and familiarity help one remember and increase reading speed. For example, it seems that most native English speakers remember English words more easily than non-native speakers.
Reading in one's native language is easier than reading something in a foreign language. A non-native speaker who uses English or learns English as a second language faces many difficulties in the process of learning English. Reading doesn't come easily. To enjoy reading and achieve proficiency, interest should be created in reading independently without facing any difficulties. Reading should be developed as a habit so that the reader enjoys what he reads.
Learning is the process of encoding into long-term memory information that is contained in short-term memory. It is a complex process that requires consistent effort. The learning process is improved through repetition and deep analysis, but only if the information being transferred from short-term memory has structure and is meaningful and familiar. Based on the above, it can be ascertained that high readability requires high skill. In the case of our sample students, due to a lack of adequate reading, the encoding process is likely reduced, resulting in less efficient performance, as indicated by slow reading speed and low scores caused by the low degree of familiarity and deep analysis.
The essence of skill is the performance of actions characterized by consistency and economy of effort. Given enough time, people can improve their performance in almost any task. Usability goals must also be established in the form of measurable objectives: performance goals such as the time it takes to complete tasks and success rates must be defined. Research indicates that a greater working memory is positively related to increased reading comprehension, reasoning skill, and learning of technical information [17]. In addition, information stored within working memory is variously thought to last from 5 to 30 seconds, with working-memory storage capacity estimated at about 3 to 4 items [14]. Based on these theories, it seems that low reading speed can be explained by the short storage time and low capacity of working memory.
Lind et al. reported that there are two levels of information processing [18].
Both levels function simultaneously. The higher level, used for reading and understanding, consists of consciousness and working memory and performs reasoning and problem solving; this level is limited, slow, and sequential. In contrast, the lower level, which perceives the physical form of the information sensed, processes familiar information rapidly. It seems that both levels of processing are likely reduced in our students.

Conclusion
Through our study we attempted to investigate the degree of readability attained by non-native English speakers. The study demonstrates that reading speed, an indicator for assessing a non-native English speaker's readability, is lower than that of a native English speaker. More importantly, the results show that the relationship between reading speed and score versus the reader can be described by linear regression with very strong correlation. The magnitudes of the extracted parameters, namely the slope and y-intercept, may be used as guidelines to assess and evaluate readability. In our future work, to clarify the weak correlations obtained in the present investigation, experiments will be carried out with separated genders, controlled time, and different grade levels.

Authors' Contribution
The co-authors (Dr. Muktisharma and Dr. Deepti Sharma) prepared the drafted text that was given to the students to carry out the survey, with shared contributions to the introduction, methodology, and conclusion by all the authors.
which describes the unidirectional propagation of waves at the free surface of shallow water under the influence of gravity [12]. In this model, u(t, x) represents the wave's height above a flat bottom; x is proportional to distance in the direction of propagation and t is proportional to the elapsed time. The KdV equation is completely integrable, and its solitary waves are solitons [13]. The Cauchy problem for the KdV equation has been studied by many authors [14] [15] [16], and a satisfactory local or global (in time) existence theory is now available (for example, in [15] [16]). The solution of the KdV equation is global [16]. It is also observed that the KdV equation does not accommodate wave breaking (by wave breaking we mean the phenomenon that a wave remains bounded but its slope becomes unbounded in finite time) [17].
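For reference, in one standard normalization (coefficients vary between references) the KdV equation and its one-soliton solution of speed c > 0 read

```latex
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(t, x) = \frac{c}{2}\,\mathrm{sech}^2\!\left( \frac{\sqrt{c}}{2}\,(x - c t) \right).
```

The smooth sech² profile contrasts with the peaked solitary waves of the CH and DP equations discussed below.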
modelling the unidirectional propagation of shallow water waves over a flat bottom. Again, u(t, x) stands for the fluid velocity at time t in the spatial direction x, and c_0 is a nonnegative parameter related to the critical shallow water speed [18]. The CH equation is derived physically by directly approximating the Hamiltonian for Euler's equations in the shallow water regime (it also appears in the context of hereditary symmetries studied by Fuchssteiner and Fokas [19]).
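In one common form (the placement of the parameter c_0 varies between references), the Camassa-Holm equation reads

```latex
u_t - u_{txx} + 2 c_0\, u_x + 3\,u\,u_x = 2\,u_x u_{xx} + u\,u_{xxx},
```

and for c_0 = 0 its solitary waves are the peakons u(t, x) = c\,e^{-|x - ct|}.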
Recently, alternative derivations of the CH equation, as a model for water waves and as the equation for geodesic flow on the diffeomorphism group of the circle, were presented in [20] and [21], respectively. For the physical derivation, we refer to the work in [22]. The geometric interpretation is important because it can be used to prove that the least action principle holds for the CH equation [23]. It is worth pointing out that a fundamental aspect of the CH equation, the fact that it is a completely integrable system, was shown in [24] [25] for the periodic case and in [26] [27] for the non-periodic case. Its solitary waves are smooth if c_0 > 0 and peaked in the limiting case c_0 = 0 [28]. They are orbitally stable and interact like solitons [29] [30], and the explicit interaction of the peaked solitons is given in [14].
Since the CH equation is structurally very rich, many physicists and mathematicians have paid great attention to it. Local well-posedness for suitable initial data was proved by several authors, as in [31] [32] [33] [34]. For initial data of lower regularity, we refer to Molinet's paper [35] and also the paper [36]. Moreover, wave breaking for a large class of initial data has been established in [31] [33] [37] [38]. However, in [39], global existence of weak solutions was proved, but uniqueness was obtained only under an a priori assumption that is known to hold only for a restricted class of initial data [43] [44]. In [45], the authors showed the infinite propagation speed for the CH equation, in the sense that a strong solution of the Cauchy problem with compactly supported initial profile cannot be compactly supported at any later time unless it is the zero solution, which improves the previous results in this direction obtained in [46].
Degasperis, Holm and Hone [47] proved the formal integrability of the DP Equation (1.5) by constructing a Lax pair. They also showed that the DP equation has a bi-Hamiltonian structure and an infinite sequence of conserved quantities, and that it admits exact peakon solutions analogous to the CH peakons. Peakons for either b = 2 or b = 3 are true solitons that interact via elastic collisions under CH dynamics or DP dynamics, respectively. Recently, Lundmark [48] showed that the DP equation has not only peaked solitons but also shock peakons. The DP equation can be regarded as a model for nonlinear shallow water dynamics, and its asymptotic accuracy is the same as that of the CH shallow water equation [2] [3] [22]. An inverse scattering approach for computing n-peakon solutions to the DP equation was presented in [49]. Its traveling wave solutions were investigated in [50].
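Both CH and DP are members of the so-called b-family; in a standard form (normalizations vary between references) this family reads

```latex
u_t - u_{txx} + (b + 1)\,u\,u_x = b\,u_x u_{xx} + u\,u_{xxx},
```

with b = 2 giving the CH equation (with c_0 = 0) and b = 3 giving the DP equation; both admit the peakon solutions u(t, x) = c\,e^{-|x - ct|}.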
The Cauchy problem for the DP equation has been studied widely. Local well-posedness of this equation for suitable initial data is established in [51] [52]. Although the DP equation is similar to the CH equation in several aspects, these two equations are truly different. One of the novel features of the DP equation, different from the CH equation, is that it has not only peakon solutions [47] and periodic peakon solutions [58], but also shock peakons [48] [61] and periodic shock waves [56]. Related well-posedness and control questions were further studied in [68] [78]. Recently, in [79] [80], our research team studied optimal distributed control of the Fornberg-Whitham equation and the θ-equation, which involve complex nonlinear terms. We clarified the well-posedness of weak solutions without relying on a viscous coefficient, which is a major improvement in comparison with our previous results. Utilizing the Dubovitskii-Milyutin functional analytical approach, we also proved the necessary optimality condition for the control systems in the fixed final horizon case. Hwang studied the quadratic cost optimal control problems for the viscous DGH equation. He derived the necessary optimality conditions of optimal controls corresponding to physically meaningful distributive observations. By making use of the second-order Gateaux differentiability of the solution mapping on control variables, he also proved the local uniqueness of the optimal control [81].
Inspired by the papers mentioned above, in the present work we investigate the b-equation from the point of view of distributed control. More precisely, we consider the following governing equation, where Bv is the external control term, L-periodic in the spatial variable x, v is a control, and B is an operator called a controller. The explicit formulation of the control problem will be provided after the investigation of the well-posedness of the state equation.
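The governing equation itself is not reproduced in this excerpt; for orientation, the b-family of equations is commonly written in the following form, and a distributed control Bv would plausibly enter as an additive right-hand side (this placement is an assumption on our part):

```latex
u_t - u_{xxt} + (b+1)\,u u_x = b\,u_x u_{xx} + u\,u_{xxx} + Bv,
\qquad b \in \mathbb{R},
```

with b = 2 giving the Camassa-Holm equation and b = 3 the Degasperis-Procesi equation.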
We mainly consider the two following problems: • for the nonlinear control system governed by the b-equation with a quadratic cost functional, does an optimal control v* exist, and if so, is this v* unique?
• if one finds the unique optimal control v* ∈ U_ad for the above control problem, how can we characterize this optimal control?
The plan of the remaining sections can be summarized as follows. In Section 2, we study the initial-boundary problem of the b-equation with a forcing function in a special function space. Adopting the Faedo-Galerkin method and utilizing a uniform a priori estimate of the approximate solution, we prove the existence and uniqueness of the weak solution under the definition introduced in the paper.
For general b ∈ R, the proof without relying on a viscous coefficient is a major improvement in comparison with our results in [74] [75] [76] and other discussions in [77] [78] [81]. In Section 3, based on the well-posedness result, we give the formulation of the quadratic cost optimal control problem for the b-equation and investigate the existence and uniqueness of the optimal solution. In Section 4, by the methods of control theory (for a more detailed discussion, we refer readers to the book [82]), we establish the sufficient and necessary optimality condition of an optimal control in the fixed final horizon case. In order to obtain this result, we also prove the Gateaux differentiability of the state variable with respect to the control. Establishing the sufficient and necessary optimality condition of an optimal control, rather than only the necessary condition, is another novelty of this paper. At last, in Section 5, we give the specific sufficient and necessary optimality condition of the optimal control v* for two physically meaningful distributed observation cases, employing the associated adjoint systems. For convenience, we shall consider the following initial-boundary value problem.

The Existence and Uniqueness of Weak Solution
In order to study the weak solution of Equation (2.2), we first introduce the following two special spaces.
These spaces are equipped with natural norms, and their basic properties are easy to verify. From now on, when we speak of a solution of Equation (2.2), we shall always mean the weak solution in the sense of Definition 2.1 unless noted otherwise. We introduce an unbounded linear self-adjoint operator whose eigenfunctions form an orthonormal basis of H.
Furthermore, we can define the fractional powers of this operator, and it can be verified that u(t, x) satisfies the boundary conditions of Equation (2.1). The proof of Lemma 2.1 can be found in our articles [79] [80].
Since a forcing function is given, we can impose natural bounds on it. By the Sobolev embedding theorem, we can estimate each of the terms involved. Therefore, combining the boundedness of the approximating sequence in H with inequality (2.6), we can derive the required bound. Similarly, multiplying both sides of the first equation in Equation (2.4) by an appropriate test function, using the Sobolev embedding theorem, inequality (2.9), and the boundary conditions of Equation (2.4), we can estimate each item. Combining the above estimates, Equation (2.11) can be reduced to the following inequality. Hence, combining inequalities (2.9) and (2.13), we find that y_m ∈ V; we also have y_m ∈ H, since V embeds into H.
Combining inequalities (2.9) and (2.10), we also know that the corresponding bound holds, and we therefore deduce the desired estimate from inequality (2.14). Collecting the analysis above, and by virtue of (2.19), we can find a subsequence (still denoted the same) with the stated convergence properties. Since j is arbitrary and finite linear combinations of the ω_j are dense in H, the limit satisfies Definition 2.1. Hence, from the analysis above and Lemma 2.1, we obtain the existence of the weak solution. Next we will discuss the uniqueness of this weak solution.
Let u_1 and u_2 be any two weak solutions of Equation (2.1) and set η = u_1 − u_2. Taking the inner product of both sides of the first equation in Equation (2.27) with η, we obtain (2.28).
Combining all the estimates above with Equation (2.28), we can deduce that η = 0. This completes the proof of uniqueness.

The Existence and Uniqueness of an Optimal Control
In this section, we give the formulation of the quadratic cost optimal control problem for the b-equation and investigate the existence and uniqueness of an optimal solution.
Let U be a Hilbert space of control variables, and let B be an operator called a controller. We assume that the admissible set U_ad is a bounded closed convex subset of U with non-empty interior in the topology of U.
We study the following nonlinear control system (3.1). The weak solution u(v; t, x) is called the state variable of the nonlinear control system (3.1).
The observation of the state is assumed to be given by an observation operator, called the observer, taking values in a Hilbert space of observation variables.
We shall consider the following quadratic cost functional associated with the nonlinear control system (3.1). Hence, the optimal control problem under discussion is to find an element v* ∈ U_ad minimizing the cost functional (3.3), subject to the controlled system (3.1) together with the control constraints.
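The display of the cost functional (3.3) is not reproduced in this excerpt; in Lions-type quadratic control problems it typically takes the following form, where C is the observer, z_d a desired observation, and R a positive-definite weighting operator (all three symbols are our assumptions, not the paper's notation):

```latex
J(v) = \big\| C u(v) - z_d \big\|_{M}^{2} + (Rv, v)_{\mathcal{U}},
\qquad v \in \mathcal{U}_{ad}.
```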
Now we discuss the existence and uniqueness of an optimal control v* for the cost functional (3.3), which is the content of the following theorem. Proof. Because U_ad ≠ ∅ is a closed convex set, there exists a minimizing sequence, and the cost functional (3.3) can be rewritten in the form (3.7). From inequality (3.5), the right-hand side of the first equation in Equation (3.7) can be estimated almost everywhere. Taking k → ∞ and using the standard arguments as in [83], we find that the limit u satisfies the required equations. Because the norm mapping is lower semi-continuous in the weak topology, the cost functional is weakly lower semi-continuous, and the minimizing sequence yields an optimal control. If there were two optimal controls v*_1 and v*_2, we would obtain a contradiction unless v*_1 = v*_2. This completes the proof.
From the above analysis, we can conclude that v* is the unique optimal solution to the optimal control problem investigated.

The Sufficient and Necessary Optimality Condition
In this section, we shall characterize the optimal control by giving the sufficient and necessary condition for optimality. We first give the following lemma from optimal control theory.
Proof. Let v* be the optimal control from Theorem 3.1. Then, for any v ∈ U_ad, from inequality (4.2) we can derive inequality (4.3); passing to the limit in inequality (4.3), we obtain the stated condition. Conversely, suppose inequality (4.1) holds. From inequality (4.4), we deduce inequality (4.5); passing to the limit in inequality (4.5), we obtain the optimality of v* for all v ∈ U_ad, which completes the proof.
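The display of condition (4.1) is not reproduced in this excerpt; first order conditions of the type discussed here are standardly stated as a variational inequality of the following generic form (our notation, quoted as the generic pattern rather than the paper's exact display):

```latex
D J(v^{*})\,(v - v^{*}) \;\ge\; 0
\qquad \text{for all } v \in \mathcal{U}_{ad}.
```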
Conditions of the type (4.1) are usually termed "first order sufficient and necessary conditions" in the terminology of the calculus of variations. In order to analyze inequality (4.1), we need to prove that the solution mapping v ↦ u(v; t, x) is Gateaux differentiable (Definition 4.1), which plays a crucial role in the following discussion.
The Gateaux derivative of the solution mapping is a weak solution of the following equation. In order to estimate it, we multiply both sides of the first equation in Equation (4.7) by 2gθ and integrate over Ω; it then follows from inequality (4.9) and Gronwall's lemma that the required bound holds. Next, we estimate each term on the right-hand side of Equation (4.11), and applying Gronwall's lemma to inequality (4.12) yields the corresponding estimate. We can also estimate each term on the right-hand side of Equation (4.14); applying Gronwall's lemma to inequality (4.15) gives a uniform bound. Afterwards, we prove a uniform estimate for the difference quotient, which completes the proof.
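The differentiability notion used throughout is the standard Gateaux derivative of the solution mapping; in generic notation (the symbols below are ours, not the paper's):

```latex
Du(v^{*})\,w \;=\; \lim_{\varepsilon \to 0}
\frac{u(v^{*} + \varepsilon w) - u(v^{*})}{\varepsilon},
\qquad w \in \mathcal{U}.
```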
The conclusion of Theorem 4.1 means that the cost functional is Gateaux differentiable with respect to the control. The sufficient and necessary optimality condition (4.1) can then be rewritten accordingly.

The Two Cases of Distributive Observations
In this section, we characterize the optimal control by giving the sufficient and necessary optimality condition for two distributive observation cases. First, we discuss the cost functional expressed by (5.1). Let v* be the optimal control subject to Equation (3.1) and cost functional (5.1). Then the sufficient and necessary optimality condition (4.29) can be represented by (5.2), in which the variation of the state is the weak solution of Equation (4.6). We now introduce the adjoint system to describe the optimality condition (5.2). Therefore, we can provide the characterization of the optimal control v* of the quadratic cost functional (5.1) as follows. Theorem 5.1. The optimal control v* of the quadratic cost functional (5.1) is characterized by the following control system, adjoint system, and inequality. Proof. Taking the inner product of the first equation of the adjoint system with the appropriate function, and combining Equation (4.6) with the adjoint system, the theorem is proved.
Secondly, we discuss the cost functional expressed by (5.6). Let v* be the optimal control subject to Equation (3.1) and cost functional (5.6). Then the sufficient and necessary optimality condition (4.29) is represented by (5.7), in which the variation of the state is the weak solution of Equation (4.6).
Similarly, we formulate the adjoint system to describe the optimality condition (5.7). Hence, we can give the following theorem.
Theorem 5.2. The optimal control v* of the quadratic cost functional (5.6) is characterized by the following control system, adjoint system, and inequality. Proof. As before, we multiply both sides of the first equation of the adjoint system by the appropriate function; utilizing Equation (4.6), the stated inequality follows, which completes the proof.

Conclusions
The b-equation is an important shallow water wave equation with many practical applications. In this paper, we pursue an in-depth study of the optimal control problem for the classical b-equation. We first investigate the local existence and uniqueness of the solution to the initial-boundary problem of the b-equation with a source term; we then formulate the quadratic cost optimal control problem for the b-equation, obtain the existence and uniqueness of an optimal control, and establish the sufficient and necessary optimality condition of an optimal control in the fixed final horizon case. Moreover, we give the specific sufficient and necessary optimality condition for two physically meaningful distributive observation cases by employing the associated adjoint systems.
Compared with other papers in similar directions, the weak solution analysis of the b-equation without relying on a viscous term is one technical innovation, and the sufficient and necessary optimality condition of an optimal control, rather than only the necessary condition, is another novelty. However, much work remains to be done in this direction. For example, since this is an optimal control problem for a distributed parameter system governed by a nonlinear partial differential equation, obtaining numerical solutions for the optimal control-trajectory pair is not easy, owing to the tremendous computational cost and possible model difficulties. We intend to complete this non-trivial work in follow-up research by optimizing the numerical algorithm and carrying out numerical simulations, which can provide a basis for applications in the engineering field.

Introduction
Let Ω be a bounded domain and let T be a positive constant. In this paper, we consider the following linear convection-diffusion-reaction equations (1), where a_0 and a_1 are positive constants.
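The display of Equations (1) is not reproduced in this excerpt; a generic linear convection-diffusion-reaction model of the kind described here reads as follows (the symbols are ours and are meant only as orientation):

```latex
\frac{\partial \phi}{\partial t}
+ \boldsymbol{v}\cdot\nabla\phi
- \nabla\cdot\!\big(A\,\nabla\phi\big)
+ a_{0}\,\phi = f
\qquad \text{in } \Omega\times(0,T).
```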
Typically, Equations (1) express a general mathematical model incorporating different types of transport phenomena in engineering and applied sciences, such as the dispersal of a pollutant through a moving viscous medium (e.g., a river or the atmosphere [1]), currents in semiconductor devices [2], and airflow past an aerofoil (see [3], for example). When the diffusive term is smaller than the convective one, these equations are the so-called convection dominated problems (see [4]).
In many practical convection-diffusion processes, convection essentially dominates diffusion (e.g., in some financial models [5]), and although the governing differential equation is parabolic, it displays several characteristics of hyperbolic problems. When applied to these problems, standard finite element and finite difference methods usually exhibit some combination of nonphysical oscillation and excessive numerical dispersion [6] [7]. It is therefore logical to design numerical procedures that incorporate the parabolic/hyperbolic nature of these problems. One such method is the modified method of characteristics (MMOC) which was first formulated for a scalar parabolic equation by J. Douglas and T. F. Russell in [8] and then extended by Russell [9] to nonlinear coupled systems in two and three spatial dimensions. Similar schemes had been defined by Pironneau [10] for the incompressible Navier-Stokes equations and by Süli [11] and Morton, Priestley, and Süli [12] for first-order hyperbolic equations, with the latter technique being referred to as the Euler characteristic Galerkin method.
The intent of the method is to obtain accurate approximations to convection-dominated problems. Basically, in the modified method of characteristics, the time derivative and the convection term are combined as a directional derivative. In other words, the procedure involves time stepping along the characteristics, allowing us to use large, accurate time steps.
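To make the directional-derivative idea concrete, the following minimal sketch (ours, not a scheme from the papers cited above; the function name and the periodic setting are assumptions) applies MMOC time stepping to the pure advection part φ_t + vφ_x = 0 on a periodic 1D grid: each step backtracks the characteristic from every grid point and linearly interpolates the previous solution at the foot.

```python
import numpy as np

def mmoc_advect(phi, v, dx, dt, steps):
    """Sketch of MMOC time stepping for phi_t + v*phi_x = 0, periodic grid.

    The value at x_i on the new time level is taken from the foot of the
    characteristic, x_i - v*dt, by linear interpolation of the old solution.
    This treats phi_t + v*phi_x as a derivative along the characteristic.
    """
    n = len(phi)
    x = np.arange(n) * dx
    length = n * dx
    for _ in range(steps):
        foot = (x - v * dt) % length               # backtrack characteristics
        j = np.floor(foot / dx).astype(int) % n    # left neighbor index
        w = foot / dx - np.floor(foot / dx)        # interpolation weight
        phi = (1 - w) * phi[j] + w * phi[(j + 1) % n]
    return phi
```

When v∆t is an integer multiple of ∆x the step reduces to an exact shift, which makes the sketch easy to sanity-check; in a full MMOC/mixed scheme the backtracked value would feed the diffusion solve.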
The mixed finite element method has proven to be an effective numerical method for solving fluid problems. It has the advantage of approximating the unknown variable and its diffusive flux simultaneously, and there are many research articles on this method ([13] [14] [15] [16]). An algorithm combining the mixed finite element method and the modified method of characteristics was first applied to the miscible displacement problem in porous media by Ewing, Russell, and Wheeler [17]. The scheme was then extended by Wheeler and Dawson [18] to advection-diffusion-reaction problems. Numerical results have verified that large, accurate time steps are possible and that sharp fronts can be resolved (without oscillations or numerical diffusion) on coarser grids than standard procedures can use.
Arbogast and Wheeler [19] defined a characteristics-mixed method to approximate the solution of an advection-dominated transport problem. It used a characteristic approximation, similar to that of the MMOC-Galerkin method, to handle advection in time, and a lowest order mixed finite element spatial approximation for the diffusion term. Piecewise constants were used in the test function space, so mass is conserved element by element. It was proved that the method is optimally convergent of order 1 in time and at least suboptimally convergent of order 3/2 in space. In [20], we considered a combined numerical approximation for incompressible miscible displacement in porous media. A standard mixed finite element was used for the Darcy velocity equation, and a characteristics-mixed finite element method was presented for approximating the concentration equation. A characteristic approximation was applied to handle the convection term, and a lowest order mixed finite element spatial approximation was adopted to deal with the diffusion term. Thus, the scalar unknown concentration and the diffusive flux can be approximated simultaneously, and the approximation conserves mass globally. Optimal L2-norm error estimates were derived. We then extended this method to the slightly compressible miscible displacement problem in [21].
It should be pointed out that the works mentioned above which involved the characteristic method all gave first order accuracy in the time increment ∆t. As for higher order characteristic methods in time, Rui and Tabata [22] used the second order Runge-Kutta method to approximate the material derivative term for convection-diffusion problems. The scheme presented was of second order accuracy in the time increment ∆t, symmetric, and unconditionally stable, and optimal error estimates were proved in the framework of L2-theory. Numerical analyses of convection-diffusion-reaction problems with higher order characteristic/finite elements were carried out in [23] [24], extending the work [22]. The goal of this paper is to present a second order characteristic mixed finite element method in the time increment to handle the material derivative term of (1).
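As a concrete illustration of the second order idea, the foot of a characteristic can be backtracked with a midpoint (second order Runge-Kutta) step instead of a single Euler step. The sketch below is ours and only mirrors the spirit of the scheme in [22]; the function name and the steady velocity field are assumptions:

```python
def rk2_foot(x, velocity, dt):
    """Midpoint (RK2) backtracking of a characteristic foot point.

    Approximates X(t_n) for the characteristic arriving at x at t_{n+1},
    i.e. the solution of dX/dt = velocity(X) traced backwards over dt.
    """
    x_half = x - 0.5 * dt * velocity(x)   # half Euler step backwards
    return x - dt * velocity(x_half)      # full step using midpoint velocity
```

For velocity(x) = x the exact foot is x·exp(−∆t), and the midpoint rule matches it to O(∆t³) per step, consistent with the second order global accuracy in ∆t discussed above.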
It is organized as follows. In Section 2, we formulate an approximate scheme that combines the second order characteristic finite element method for the material derivative term and mixed finite element method for the diffusion term. In Section 3, we prove the stability of the combined approximate scheme. In Section 4, we derive the L 2 -norm error estimates for both the unknown scalar variable and its flux. The scheme is of second order accuracy in time increment, symmetric, and unconditionally stable.

Formulation of the Method
In this paper, we adopt the notations and norms of the usual Sobolev spaces, together with some notations for the functional spaces involved, which were used in [23]. We also adopt some propositions and lemmas from [23], including the change-of-variables formula (7) and its inverse. By using Liouville's theorem and the chain rule, we obtain the identities needed below.

Variational Formulation
From the definition of the characteristic lines and by using the chain rule, it follows that the material derivative identity (12) holds. By introducing the flux σ = A∇φ and using (12), we obtain (13). Before giving a weak formulation of (13), we adopt a lemma from [23], which can be considered a Green's formula: let X: Ω → Ω be an invertible C¹ vector-valued function and let F = ∇X. We then integrate over Ω and apply the usual Green's formula together with (14) to obtain the weak formulation.

The Combined Approximate Scheme
We now present our time-stepping procedure for (17). Let N be the number of time steps, ∆t = T/N the time step, and t_n = n∆t for n = 0, 1/2, 1, 3/2, …, N.
We use superscript notation for the time levels. By using (8) and (11), and taking (20) into account, a notation similar to the one in Section 2.2 is used for the Jacobian. Thus, in the case where the characteristic lines and their gradients are not explicitly known, we propose the time approximation (27) of (21). The fully discrete scheme then reads: given the initial approximation, advance from one time level to the next. Throughout the analysis, K will denote a generic positive constant, independent of φ_h, σ_h, and ∆t. Similarly, ε will denote a generic small positive constant.

Stability of the Approximate Scheme
In this section, we derive the stability of the approximate scheme (29). In order to develop the stability, some assumptions on the different terms of (1) are required.
The constant c is given explicitly below. For the boundary integral term I_6, we first use some properties of the inner product in the underlying space. Then, by summing up (33) to (38), inequality (31) follows.
By using Lemma 11 and following arguments similar to those of Lemmas 5.6 and 5.7, the stability of the approximate scheme follows.

Error Estimate Theorem
Now we turn to deriving a priori error estimates in the L2-norm for the solutions of (29). In order to state the error estimates, we need the two following Lagrange interpolation operators. In order to estimate the terms on the right-hand side of (44), we adopt the following lemmas. Lemma 14. [23] Assume the above Claims 2-5 hold, and that the coefficients of the problem (1) are sufficiently smooth. Then the solution of (43) satisfies (46), where c_1 denotes a constant independent of ∆t.
Lemma 15. [23] Assume the above Claims 2-5 hold, and that the coefficients of the problem (1) are sufficiently smooth; then the corresponding estimate holds, where c_1 denotes a constant independent of ∆t. We now turn to bounding the third term on the right-hand side of (44).
Lemma 16. Assume the above Claims 2-5 hold, and that the coefficients of the problem (1) are sufficiently smooth.

Introduction
Several years ago, Barry Simon investigated a new approach to inverse spectral theory for the half-line Schrödinger operator [3].

A new A-function, introduced in [1] [2] [3], is related to the Weyl-Titchmarsh m-function by the relation (1), valid for all a.
In [3], the key discovery is that the A-function satisfies an integro-differential equation. Thus the inverse problem of determining q from m becomes the problem of solving the integro-differential Equation (2). Properties of (2) are discussed in [4] [5] [6] [7]. To construct numerical solvers for this integro-differential equation, one needs to study sets of exact analytic solutions.
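For orientation, the A-equation of Gesztesy and Simon, to which Equation (2) presumably corresponds, has the following well-known form (quoted from the literature; the normalization used here is an assumption on our part):

```latex
\frac{\partial A}{\partial x}(\alpha, x)
= \frac{\partial A}{\partial \alpha}(\alpha, x)
+ \int_{0}^{\alpha} A(\beta, x)\,A(\alpha - \beta, x)\,\mathrm{d}\beta,
\qquad A(0_{+}, x) = q(x).
```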
In this paper, we study a larger class of analytic solutions of (2), of the form (4). Substituting (4) into (2), we find that the coefficients and the γ_j satisfy the nonlinear equations (5). We then give a method for solving (5) explicitly in Section 3. The idea is to introduce new variables c_j, the symmetric functions of the γ_j (j = 1, …, n), that is, (6). This nonlinear system turns out to be solvable. Calogero proved that a certain family of n-body problems is solvable in a 2004 J. Math. Phys. paper, and his model includes system (6); however, the method we use here is different from his approach. Our method also shows an insightful connection to scattering problems. In Section 3, we first find n constants of motion for the system (6), which allow us to reduce it to a first order nonlinear system. We note that the γ_j are zeros of polynomials with coefficients c_j. Calogero pointed out in [8] [9] that some nonlinear systems can be linearized by a nonlinear mapping between the coefficients of a polynomial and its zeros, and thus are integrable. The novelty in this paper is that the nonlinear mapping from the γ_j to the c_j relates the system (5) to a solvable yet still nonlinear system. Section 4 shows how we obtain analytic examples of (2) by following this systematic procedure.
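The mapping (6) from the γ_j to their symmetric functions can be made concrete numerically. The sketch below (ours; the function name is an assumption) recovers the elementary symmetric functions as, up to sign, the coefficients of the monic polynomial with roots γ_j, which is exactly the coefficients-zeros correspondence exploited in Section 3.

```python
import numpy as np

def symmetric_functions(gamma):
    """Map the zeros gamma_j to their elementary symmetric functions c_j.

    By Vieta's formulas, np.poly returns the monic polynomial coefficients
    [1, -e1, e2, -e3, ...] for the given roots, so e_j = (-1)^j * coeffs[j].
    """
    n = len(gamma)
    coeffs = np.poly(gamma)                    # roots -> polynomial coefficients
    return [(-1) ** j * coeffs[j] for j in range(1, n + 1)]
```

For roots 1, 2, 3 this returns the familiar values e1 = 6, e2 = 11, e3 = 6.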

The γ Equation
As described in the introduction, we relate a large class of exact solutions of the A-Equation (2) to a second order nonlinear system (5).
Without loss of generality, we assume γ_i ≠ γ_j for all i ≠ j. Then the following proposition follows by direct calculation.
The relevant coefficient matrix is invertible. Proof. Suppose the matrix is not invertible; then there exists a non-zero vector in its kernel, which contradicts our assumption that the γ_j are distinct, and proves that the given matrix must be invertible.

Non-Linear Integrable Equation
To solve Equation (5) explicitly, we construct a nonlinear mapping from γ to new dependent variables c. We take c_j to be the j-th symmetric function of γ_1, …, γ_n. For convenience, we define c_0 = 1 and c_j = 0 for j < 0 or j > n. For j = n + 1, the stated identity holds. Proof. As in the previous calculations, for every 1 < j ≤ n + 1, we have the corresponding recursion, and the claim follows by assumption.

Second Order Nonlinear System to First Order Nonlinear System
We have identified n constants of motion for the system (14), which will allow us to reduce the second order system to a first order system. Proposition 6. The nonlinear system (14) has the stated constants of motion for all 0 ≤ j ≤ n. Here c_j = 0 when j > n or j < 0.
Proof. Multiplying the first of these equations by c_j and the second equation by the corresponding factor, and adding, gives the conservation law. Theorem 1 establishes the equivalence of the first order system (23) and the second order system (14).
Proof of Theorem 1. (i) This result follows directly from Proposition 6.

Method of Integrating Factor
We have reduced the second order nonlinear system (14) to the first order nonlinear system (23). To solve the latter system explicitly, we begin by writing it in matrix form; we will show that the nonlinear system (28) can be solved explicitly.
The nonlinear system (28) can be written in the form (30). Our goal is to find an integrating factor M such that, after multiplication on the left by M, (30) becomes integrable and the compatibility conditions hold. To find M, we rewrite (34) and (36) in matrix form. Thus each of the n rows of M solves an over-determined linear system, consisting of n + 3 equations in n + 1 unknowns.
Studying the structure of the matrix, the left side of this equation vanishes, since the condition in the hypothesis can be represented in the stated form, where the κ_j² are arbitrary distinct non-zero constants.

Lemma 8 also shows that (39) is solvable only under a compatibility condition.
The following proposition proves that N satisfies both NC = 0 and NC″ = 0.
Proposition 10. If (28) is satisfied, then the system (28) is integrable and equivalent to the corresponding linear system.
Remark 14. This proves that (28) is integrable and provides a procedure to obtain explicit solutions from the linear system (51).

Exact Analytic Examples
In this section, we illustrate the procedure of Sections 2 and 3 by a simple exact analytic example.
We explicitly discuss the case where n = 2 in (4). We construct the nonlinear mapping from γ to c. After multiplying (54) by M on the left, the first order system is solved explicitly. Indeed, the c_j satisfy the linear system (51), which then takes an explicit form whose constants can be computed directly.

Conclusion
A large class of exact solutions to the A-Equation was found in this work. Techniques used in our approach include a nonlinear transformation between the coefficients of a polynomial and its zeros, constants of motion, and an interesting integrating factor method. The nonlinear system studied here is of interest not only for its connection to inverse problems; it also represents a larger category of integrable systems than the C-integrable systems and is worth further investigation.

Introduction
The mathematical modelling and the numerical simulations have become important tools for better understanding of the human cardiovascular system in recent years. One of the main goals in investigating the flow in the aortic system is to understand arteriosclerosis and the related phenomena as well as their dependence on the blood flow structure. In particular, the aorta and arteries have a low resistance to blood flow compared with the arterioles and capillaries. When the ventricle contracts, a volume of blood is rapidly ejected into the arterial vessels. Since the outflow to the arterioles is relatively slow because of their high resistance to flow, the arteries are inflated to accommodate the extra blood volume.
During diastole, the elastic recoil of the arteries forces the blood out into the arterioles. Thus, the elastic properties of the arteries help to convert the pulsatile flow of blood from the heart into a more continuous flow through the rest of the circulation. Hemodynamics is a term used to describe the mechanisms that affect the dynamics of blood circulation [1] [2] [3]. In reality, an accurate model of blood flow in the arteries would include the following realistic features: 1) the flow is pulsatile, with a time history containing major frequency components up to the eighth harmonic of the heart period; 2) the arteries are elastic and tapered tubes; 3) the geometry of the arteries is complex and includes tapered, curved, and branching tubes and 4) in small arteries, the viscosity depends upon vessel radius and shear rate [4]. Such a complex model has never been accomplished.
But each of the features above has been "isolated," and qualitative if not quantitative models have been derived. As is so often the case in the engineering analysis of a complex system, the model derived is a function of the major phenomena one wishes to illustrate.
Our goal is to model and examine the general trend of possible solutions associated with the governing equations describing a simple one-dimensional blood flow that would depict blood progressing within a thin and elastic pulsating artery. In reality, for many flow situations, the changes of density due to changes in pressure associated with the flow are very small but not zero. In our simulations, the density is assumed variable for the following reason: hereafter we treat blood not as a homogeneous fluid but as a suspension of particles (red cells, white cells, platelets) in a fluid called plasma. Blood particles must be taken into account in the rheological model in smaller arterioles and capillaries, since their size becomes comparable to that of the vessel [5] [6] [7] [8]. In particular, as has been discussed in [9], red blood cells (RBCs) exhibit a unique deformability, which enables them to change shape reversibly in response to an external force. Human RBCs have the ability to undergo large deformations when subjected to external stresses, which allows them to pass through capillaries that are narrower than the diameter of a resting RBC. In fact, RBCs are more deformable than any other biomaterial. RBCs are biconcave discs, typically 6-8 μm in diameter and 2 μm thick, and their deformation can involve a change in cell curvature, a uniaxial deformation, or an area expansion. In mammals, RBCs are non-nucleated and consist of a concentrated hemoglobin solution enveloped by a highly flexible membrane. The deformability of RBCs plays an important role in their main function, the transport of gases (O2 and CO2) via blood circulation (see also [10] and [9]).
To put the deformability of blood due to pressure in perspective, consider a multi-component system of total volume V, with V = Σ_i V_i, where V_i is the subvolume of component i in the system. The (isothermal) compressibility of the system is κ = −(1/V)(∂V/∂p), while the compressibility of each component is κ_i = −(1/V_i)(∂V_i/∂p). Therefore, (2) reduces to κ = −(1/V) Σ_i (∂V_i/∂p). Finally, denoting the volume fraction of each component of the system by α_i = V_i/V, we obtain κ = Σ_i α_i κ_i. In our case, the main contribution to the compressibility and deformability of the blood comes from the RBCs.
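The volume-fraction-weighted relation κ = Σ_i α_i κ_i derived above is easy to check numerically. In the sketch below, the function name and the plasma/RBC values are illustrative assumptions, not measured data:

```python
def mixture_compressibility(fractions, kappas):
    """Volume-fraction-weighted compressibility of a multi-component system.

    Implements kappa = sum_i alpha_i * kappa_i for volume fractions alpha_i.
    """
    assert abs(sum(fractions) - 1.0) < 1e-12, "volume fractions must sum to 1"
    return sum(a * k for a, k in zip(fractions, kappas))

# Hypothetical two-component blood: plasma-like and RBC-like compressibilities (1/Pa)
kappa = mixture_compressibility([0.55, 0.45], [4.5e-10, 3.4e-10])
```

With these illustrative numbers the mixture value lands between the two component values, weighted toward the larger volume fraction, as the relation requires.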
The method of obtaining the general solution of the governing equations for the given model is considered from two points of view. The first approach is a singular perturbation theory providing explicit approximate solutions to the model, while the second is based on group theoretical modeling, which provides exact solutions written in implicit form. The main goal is to compare these two approaches and bring out the efficiency and deficiencies of each.
The range of developed models, or models being developed, extends from lumped models to complicated three-dimensional fluid-structure models [11] [12]. In this article we consider a simple one-dimensional model of blood flow in a vessel. The blood flow described by this model, and generally by all one-dimensional models, is not suitable for describing flow in complicated morphological regions such as stenoses or bifurcations. However, these situations can also be covered to a certain extent, and such models can be used as an alternative to the more complex three-dimensional fluid-structure models, or in conjunction with them in a geometrical multiscale fashion, as explained in [13].
On the other hand, the computational complexity of one-dimensional models is several orders of magnitude lower than that of multi-dimensional models. In the past few decades, the multiscale approach has attracted wider interest. Namely, in a multiscale approach, one-dimensional models may be coupled on the one hand with lumped-parameter models [12] based on systems of ordinary differential equations [11] [14], or with three-dimensional fluid-structure models, as discussed in [15] and [16]. In the latter case they may also provide a way of implementing more realistic boundary conditions for 3D calculations; or they can be used for the numerical acceleration of a three-dimensional Navier-Stokes solver in a multilevel-multiscale scheme. Additionally, one-dimensional models give a good description of the propagation of pressure waves in arteries [17] [18]; hence they can be successfully used to investigate the effects on pulse waves of geometrical and mechanical arterial modifications, due e.g. to the presence of stenoses or to the placement of stents or prostheses [13].

Modeling
In order to describe a problem in mathematical terms, one must make use of the basic laws that govern the elements of the problem. Within the frame of the present modeling, we start with conservation laws for mass and momentum and consider a perfect compressible fluid propagating along a tube with longitudinal coordinate x and slowly varying cross-section a(x, t). Because of the pressure gradient in the blood, the artery wall must deform, and the elastic restoring force in the wall makes it possible for waves to propagate. In terms of one-dimensional modeling, we assume that the artery radius varies slowly. Conservation of mass requires (6). We next assume that the time rate of momentum change in the volume is balanced by the net influx of momentum through the two ends and the pressure force acting on all sides. The rate of momentum change M is given by (7), where ρ is the density of the fluid, associated with the mixture density of the blood consisting of blood plasma and red blood cells (RBCs). In reality, the RBC fraction may induce large viscosity variations, stressing the importance of accounting for non-Newtonian effects (see e.g. [19], where the Quemada viscosity model [20] is used to account for the non-Newtonian viscosity behavior).
The net rate of momentum influx and the net pressure force at the two ends can be computed likewise. Balancing the momentum, i.e. equating M given by (7) to the sum of (8) and the pressure force P given by (10), we get, after making use of mass conservation (6), the momentum equation (11). Arterial pulse propagation varies along the circulatory system as a result of the complex geometry and nonuniform structure of the arteries. In order to learn the basic facts of arterial pulse characteristics, we assume an idealized case of an infinitely long circular elastic tube that contains a slightly compressible blood, which is a suspension of particles in what is essentially water. As such, its compressibility will be mainly due to the RBC, as explained above. We can think of it as a two-phase homogeneous nonviscous fluid flow of water and gas bubbles. If we apply pressure to the water/gas mixture, the overall density will decrease as the gas compresses, leading to the mixture continuity equation that, under the assumption of zero relative velocity, reduces to the equivalent single-phase flow of density ρ [8]. In addition, empirical constitutive laws are needed to relate pressure and density, such as equations of state involving the temperature T and the ratio γ of specific heats at constant pressure and constant volume. Furthermore, since pressure is a function of density only, we can write p = λρ (20), where λ is a constant. Since the force due to gravity is neglected, combining (11), (12) and (17), we arrive at the governing equations of motion (18)-(20) for the unknowns velocity v(x, t), pressure p(x, t) and density ρ(x, t), in which t is time, x is a spatial variable and λ is a constant.
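Under the barotropic law (20), small disturbances propagate with speed c = sqrt(dp/dρ) = sqrt(λ). The following minimal sketch checks this numerically; the value of λ is an assumed illustrative number, not one taken from the paper.

```python
import numpy as np

lam = 4.0                         # assumed illustrative value of lambda in p = lam*rho
rho = np.linspace(1.0, 2.0, 201)  # density samples
p = lam * rho                     # linear barotropic law, Equation (20)

# Sound speed c = sqrt(dp/drho); for p = lam*rho it is sqrt(lam) everywhere.
c = np.sqrt(np.gradient(p, rho))
print(c.min(), c.max())           # both equal sqrt(lam) = 2.0
```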
We eliminate the pressure from these equations by differentiating Equation (18) with respect to t and using the equation of state (20) to get (21). Using Equation (19), we can rewrite (21) as (22). As follows from Equations (18) and (20), we have (23); substituting this result into (22), we arrive at a single equation (24) for the velocity v.

First Approach: Approximate Analysis
In order to identify the resonant input in the model, we start with an approximate solution in the form of a naive expansion (25), where ε is a small parameter and v = 0 is an exact trivial solution of Equation (24).

Failure of the Direct Approach
We substitute the expansion (25) into (24) and collect powers of ε .
In view of the representation (27), the right-hand side of Equation (30) can be written out explicitly. We look for a particular solution of the form (32); substituting it into (31), we obtain expressions for the constants H and R. As expected, because of the dispersion relation (28), the right-hand side of Equation (30) corresponds to resonance and yields secular terms.
In particular, if we looked for a particular solution of the form (34), the resonant input would be removed, since in this case the constants H and R would remain bounded, and the particular solution would have the form (36). As seen from (37), the first terms of the expansion (25) provide a local (small t) approximation at most; Figure 1 illustrates this breakdown. The shortcoming of (25) reflects the failure of the straightforward approach to nonlinear perturbation analysis of Equation (24), but it has a transparent explanation: the nonlinear terms in (24) slowly but cumulatively absorb energy and damp the motion.
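The secular breakdown described above is easy to reproduce numerically. Since Equation (24) is not reproduced here, the sketch below uses the standard Duffing oscillator u'' + u + εu³ = 0 as a stand-in; its naive two-term expansion contains exactly the same kind of t·sin t secular term and fails once t = O(1/ε).

```python
import numpy as np

eps = 0.1                      # small parameter
dt = 0.002
t = np.arange(0.0, 50.0, dt)

# Reference solution of the Duffing oscillator u'' + u + eps*u**3 = 0 with
# u(0)=1, u'(0)=0, integrated by classical RK4.  (This toy equation stands
# in for Eq. (24): it shows the same secular breakdown of a naive expansion.)
def rhs(y):
    u, v = y
    return np.array([v, -u - eps*u**3])

y = np.array([1.0, 0.0])
u_true = np.empty_like(t)
for i in range(t.size):
    u_true[i] = y[0]
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y = y + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

# Two-term naive expansion u ~ u0 + eps*u1: the t*sin(t) term is secular.
u_naive = np.cos(t) + eps*(-(3/8)*t*np.sin(t) + (np.cos(3*t) - np.cos(t))/32)

print(np.abs(u_true).max())    # stays O(1): the true motion is bounded
print(np.abs(u_naive).max())   # grows like eps*t: useless once t ~ 1/eps
```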

Multiple Scale Approach
We introduce the slow scale according to the new variable (38) and now consider the fast scale t and the slow scale τ as independent variables.
We rewrite Equation (24) in terms of the new variable (38) and modify the series expansion (25) to the form (39), which yields a perturbation hierarchy similar to (26) and (30). In the problem at O(ε²), F represents the right-hand side of Equation (30), and an extra term appears because of the scaling (38). Since the derivative with respect to the slow variable appears only at O(ε²), the problem at O(ε) is identical to Equation (26). The slow time variable τ is implicit in the constant of integration, and the most general real-valued solution of the O(ε) problem can be written as (43). With this solution in hand, Equation (40) determines the correction. To avoid secular terms we must impose solvability conditions; the natural choice is to set A_1* = 0. Then, squaring and adding the resulting equations for A_1, we arrive at a single equation (46). Solving Equation (46), we write the solution in the form (47), where c_1 is a constant of integration and ω_1 is related to k_1 and λ by the dispersion relation (28).
For the purpose of visualization, Figure 2 is used to compare the qualitative behavior of the solution with that shown in Figure 1.
A question of particular interest is the investigation of the asymptotic stability of the trivial solution v = 0. This will be done in the next section.

Stability of Perturbed Steady Flows
We note the stationary solution of Equation (24), i.e. the solution with all time derivatives dropped,
which can be integrated to give the exact solution U(x). Since, as Figure 2 shows, U(x) grows linearly in x, we classify U(x) as a non-physical solution.
Let us now look for a nonstationary solution of Equation (24) of the form (53), where ε is a small parameter and v denotes the perturbation. This procedure is largely formal. The mathematical ideal requires a proof that, as ε → 0, the solution of the complete equations approaches a solution of the approximate equations at zeroth order in ε (at least asymptotically). In fact, this ideal is achieved only in very rare cases; researchers are usually limited to the formal construction of an approximate model. The justification is based on physical intuition, which leaves wide scope; it is clear that, at the same time, the role of the criterion of practice is greatly increased.
We assume a perturbation in the form of a plane harmonic wave propagating in the positive x direction, (54), where A is a constant amplitude, k is a wave number and ω is the angular frequency of the oscillations. Substituting the representations (53) and (54) into Equation (24) and collecting the terms of order ε⁰, we get the nonlinear equation for the mean flow, which coincides with (49). For a progressing-wave-like solution, Equations (56) and (57) are satisfied provided that ω and k obey the dispersion relation (28), with the two wave modes given by (29). As one can expect, since the flow is away from frictional boundaries, the dispersion relation (28) confirms the asymptotic stability of the mean constant flow (58).
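The dispersion relation and its two wave modes can be checked directly. The sketch below assumes that the linearization of Equation (24) about v = 0 is the wave equation v_tt = λ v_xx (consistent with the barotropic sound speed √λ); the values of λ and k are illustrative.

```python
import numpy as np

lam = 4.0                      # assumed constant lambda from Eq. (20)
k = 3.0                        # illustrative wave number
omega = np.sqrt(lam) * k       # dispersion relation (28): omega = ±sqrt(lam)*k

# Plane wave v = exp(i(k x - omega t)): for the linearised wave equation
# v_tt = lam * v_xx we have v_tt = -omega**2 * v and v_xx = -k**2 * v,
# so the residual is (lam*k**2 - omega**2) * v, which must vanish.
x, t = 0.7, 1.3                                  # arbitrary sample point
v = np.exp(1j*(k*x - omega*t))
residual = (-omega**2)*v - lam*(-k**2)*v
print(abs(residual))                             # 0 when (28) holds
```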

Second Approach: Group Theoretical Point of View
Detailed presentations of the theory of symmetries and invariant solutions of differential equations can be found elsewhere [23] [24] [25] [26]. For convenience, we summarize the basic notation from calculus of Lie group analysis in Appendix, which represents a simplified version of the overview of basic concepts of Lie symmetry groups.
The above transformation groups have the following generators (also called infinitesimal symmetries):

Traveling Waves
Traveling waves are invariant solutions with respect to any linear combination of the translation generators X_1 and X_2. We take the linear combination in the form (61). The operator (61) has two independent invariants, v and μ, and the representation (62) reduces Equation (24) to an ordinary differential equation. We integrate it once; the second integration gives the cubic equation for determining f(μ). Thus, the traveling wave solution (62) is determined by the cubic Equation (63) and involves three arbitrary parameters.
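Numerically, the traveling-wave profile can be recovered branch by branch from the cubic (63). The coefficients below are placeholders, since the text only states that f(μ) satisfies a cubic with three arbitrary parameters; a hypothetical μ-dependence is placed in the constant term.

```python
import numpy as np

# Hypothetical coefficients: the text only states that f(mu) satisfies a cubic
# with three arbitrary parameters, so a3, a2, a1 and c0 below are placeholders.
a3, a2, a1 = 1.0, 0.0, -3.0

def f_branches(mu, c0=1.0):
    """Real roots of a3*f**3 + a2*f**2 + a1*f + (c0 - mu) = 0, sorted."""
    roots = np.roots([a3, a2, a1, c0 - mu])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

branches = f_branches(1.0)
print(branches)    # up to three real branches of the traveling wave
```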

Similarity Solution
The invariant solution with respect to the generator of the uniform scaling transformation group is known as a similarity solution.
The operator (64) has two independent invariants, v and the similarity variable, and the invariant solution has the representation (66). Calculations show that the representation (66) reduces Equation (24) to a second-order ordinary differential equation.

Rewriting this equation in the form (67) and integrating once, we arrive at the first-order Equation (68). We transform Equation (68) and separate the variables to obtain (71). Evaluating the integral in (71), we obtain the cases (72)-(75); the similarity solutions are therefore given by (65), (66), (70) and (72)-(75).

Conclusion
We have investigated a nonlinear partial differential equation of second order that can be used to model simple one-dimensional blood flow inside a tube of varying cross-section. This model can serve as an approximation for a pulsating elastic artery. We have proposed two different points of view. The first approach represents a singular perturbation theory that formalizes the scale-separation property by explicitly defining the multiple scales that exist in the given nonlinear model, with the goal of separating derivatives with respect to fast and slow scales into different orders of perturbation theory. The advantage of this approach is that it yields a solvable perturbative hierarchy of equations that provides useful information already at low orders in ε. However, its disadvantage is the need to identify the various scales a priori and, in the frame of the present modeling, the multiple scale approach cannot be carried beyond the leading order. Alternatively, the group theoretical approach provides all possible exact solutions of the nonlinear model (24) without any perturbations and, consequently, without introducing a small parameter ε, which is a significant advantage. In this article, the exact solutions were obtained implicitly by solving nonlinear ordinary differential equations; this implicitness is a deficiency of the latter approach. Nevertheless, in terms of numerical simulations, the second approach seems more advantageous.
is known as the generator of the group G.
Hence any one-parameter group has exactly N − 1 functionally independent invariants (a basis of invariants). One can take them to be the left-hand sides of N − 1 first integrals. The commutator [X_1, X_2] = X_1 X_2 − X_2 X_1 of two generators is again a generator; due to this property, L is called a Lie algebra. If the dimension of the vector space L is equal to r, the space is denoted by L_r and is called an r-dimensional Lie algebra. An r-dimensional Lie algebra L_r generates a group depending on r parameters, which is called an r-parameter group.

Based on the core enterprise's credit, collateral is basically handled by the bank or by third parties commissioned by the bank. Therefore, the main mode of supply chain finance in China is the banking-oriented model. At the same time, as the most powerful leading enterprise in the supply chain, the core enterprise can help to improve the whole supply chain with its advantage of information and credit. The credit guarantee financing model, providing a credit guarantee for the downstream retailers, gradually appeared.
Regarding supply chain decision-making and coordination, most scholars study wholesale price contracts, risk sharing, revenue sharing, buy-back contracts, rebate contracts and other contracts based on a two-echelon supply chain, retailer-manufacturer or supplier-manufacturer.
In the supply chain literature, only a small number of scholars consider the impact of financial constraints on supply chain decisions, and research on the decision-making of the three-echelon supply chain retailer-manufacturer-bank is also scarce. In this paper, the credit guarantee contract is integrated into the supply chain finance model to study the decision-making of supply chain finance.

Literature Review
In recent years, many scholars have used the basic newsboy model to study supply chain finance. Nina et al. [5] design a partial credit guarantee contract for SCF, incorporating bank credit financing and the manufacturer's trade credit guarantee, to analyze its equilibrium financing strategies. Yan & Sun [6] study the optimal strategy of the capital-constrained retailer in SCF through warehouse receipt pledge under an uncertain demand environment; considering the credit limit and the ruin probability of the retailer, the optimal financing interest rate, the retailer's optimal order decision and the optimal wholesale price of the manufacturer are analyzed. Lu et al. [7] establish a multi-stage supply chain decision model including a supplier, a downstream manufacturer and a financial institution according to the accounts receivable financing model of the supply chain, and study the decision-making problem of the firms with and without financing. Yi & Zhou [8] consider a two-echelon supply chain consisting of a single supplier and a single retailer, and analyze the bank's loan-to-value ratios when the retailer pledges his order contract in a newsboy supply chain. Li & Lou [9] establish a one-to-one supply chain dynamic game model with a capital-constrained supplier, and analyze how a retailer's advance payment improves the efficiency of the supply chain. Lin Chen et al. [10] study the pricing and effort decisions of a supply chain with a single manufacturer and a single retailer; the results imply that the uncertainty degree of sales effort elasticity has an outstanding influence on the pricing and effort decisions, whereas the uncertainty degree of price elasticity has a modest impact on these decisions. Z. Liu et al. [11], considering a contract-design problem for two competing heterogeneous suppliers working with a common retailer, find that the retailer's optimal order quantity is determined by the inverse distribution of the external demand and the confidence level.
H. Chen et al. [12] consider the optimal selling problem of a supplier who sells the same product to two competing retailers under two types of uncertainty, the selling costs of the retailers and the external demand; the results demonstrate that higher risk levels correlate with lower belief-degree costs of the two retailers and higher belief-degree sizes of the market.
On the basis of this research, the credit guarantee contract is integrated into the supply chain finance model. This paper innovatively sets up the credit guarantee coefficient and the retailer's loan coefficient, and studies the influence of these coefficients on the decision-making of supply chain finance.

Model Description
Since supply chain financing generally covers a short period, this paper establishes a single-order-cycle supply chain finance model including a capital-constrained retailer, a core manufacturer and a bank. We then formulate a Stackelberg game model in which the bank acts as the leader.

Variable Definitions and Parameters
In order to describe the analysis model easily, the relevant variables and parameters are defined as follows.

Let F(x) denote the cumulative distribution function of the random demand x. Let q, w, R_r respectively be the decision variables, and q*, w*, R_r* respectively be the retailer's optimal order quantity, the manufacturer's optimal wholesale price, and the bank's optimal interest rate.

Model Assumptions
In order to describe the analysis model easily, we make the following assumptions.
1) The supply chain finance system considers only one order cycle, consisting of a retailer, a manufacturer and a bank.
2) It is assumed that the demand distribution function F(x) is continuous and has an increasing generalized failure rate (IGFR).
3) It is assumed that the retailer sells only a single product, that the manufacturer can fully meet the demand of the retailer, and that the bank can meet the retailer's loan.

4) Assume that the retailer's financing volume is proportional to the order quantity; let the loan coefficient be the parameter a, so that B = aq.
5) Assume that the product has no residual value and that p ≥ a(1 + R_r).

6) In the supply chain finance system, the participants are risk neutral, and the goal of each is to maximize its expected profit.

The Model Framework
A single-cycle supply chain financial system consisting of a retailer, a manufacturer and a bank is established. The capital-constrained retailer needs to apply for a loan. The manufacturer provides a certain credit guarantee for the retailer's loan and will repay a certain proportion of the remaining loan when the retailer cannot repay. The bank is the leader of the supply chain finance system.
The supply chain finance process is illustrated in Figure 1.
1) First of all, as the leader of the supply chain financial system, the bank sets an appropriate loan interest rate R_r. 2) Secondly, the manufacturer, as the sub-leader, decides a wholesale price w once the bank has given the interest rate.
3) Then, the retailer decides the order quantity q after the bank has given the interest rate R_r and the manufacturer has decided the wholesale price w. At the same time, the retailer applies to the bank for a loan B = aq because of its capital constraint.

4) At the end of the period, the retailer repays the loan according to its sales. If the retailer has the ability to repay the loan, the retailer bears all the principal and interest itself (Figure 1).

The Supply Chain Finance System's Profits
Proposition 1. When the random demand x is greater than the minimum realized demand x_1, the retailer can pay the principal and interest by itself at the end of the period, where x_1 = aq(1 + R_r)/p. Moreover, x_1 ≤ q.
Proof: At the end of the period, the retailer needs to repay the principal and interest aq(1 + R_r), which requires a sales revenue of at least px_1 = aq(1 + R_r).
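Proposition 1 is straightforward to evaluate. In the sketch below, a and R_r take the values used later in the paper's numerical example, while p and q are assumed illustrative values.

```python
# Minimum realized demand x1 at which the retailer can repay principal and
# interest by itself (Proposition 1): p*x1 = a*q*(1 + Rr)  =>  x1 = a*q*(1+Rr)/p
p  = 10.0   # unit retail price (assumed illustrative value)
a  = 6.0    # retailer's loan coefficient (value used in the paper's example)
Rr = 0.06   # bank's loan interest rate (value used in the paper's example)
q  = 100.0  # order quantity (assumed illustrative value)

B  = a * q                  # loan volume, Assumption 4: B = a*q
x1 = B * (1 + Rr) / p
print(x1)
assert x1 <= q              # guaranteed by Assumption 5: p >= a*(1 + Rr)
```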

The Supply Chain Finance System's Profits in Decentralized System
At the beginning of the period, the manufacturer gives the wholesale price w and the retailer orders quantity q. Due to its financial constraint, the retailer applies for a loan B = aq from the bank, and its own funds are wq − aq.
The retailer's expected profit function can be expressed as in Equation (2). The manufacturer's profit function and expected profit function can be expressed as in Equations (3) and (4), and the bank's profit function and expected profit function as in Equations (5) and (6).

The Supply Chain Finance System's Profit in Centralized System
The supply chain finance system's profit function can be expressed as in Equation (7), and its expected profit function as in Equation (8).

The Supply Chain Finance System's Optimal Decisions
We solve the model via backward induction to determine the optimal decisions of the supply chain finance system.
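The backward induction can be sketched on a toy instance. Everything below is a simplified placeholder: demand is assumed uniform, the bank's profit ignores default risk and the guarantee, and the profit expressions are not the paper's Equations (2)-(8); the sketch only illustrates the leader-follower solution order bank → manufacturer → retailer.

```python
import numpy as np

# Toy backward-induction sketch of the three-stage Stackelberg game
# bank (Rr) -> manufacturer (w) -> retailer (q).
p, c, a, Rf = 10.0, 6.0, 6.0, 0.03
D = 200.0                                  # demand x ~ U[0, D] (assumed)
qs = np.linspace(1.0, D, 400)              # retailer's decision grid

def retailer_q(w, Rr):
    """Stage 3: retailer's best-response order quantity q*(w, Rr)."""
    E_sales = qs - qs**2/(2*D)             # E[min(x, q)] for uniform demand
    profit = p*E_sales - (w - a)*qs - a*(1 + Rr)*qs
    return qs[np.argmax(profit)]

def manufacturer_w(Rr):
    """Stage 2: manufacturer's best-response wholesale price w*(Rr)."""
    ws = np.linspace(c, p, 200)
    return ws[np.argmax([(w - c)*retailer_q(w, Rr) for w in ws])]

# Stage 1: the bank picks Rr anticipating both followers' reactions.
Rrs = np.linspace(0.03, 0.20, 50)
bank_profit = [a*retailer_q(manufacturer_w(Rr), Rr)*(Rr - Rf) for Rr in Rrs]
Rr_star = Rrs[int(np.argmax(bank_profit))]
w_star = manufacturer_w(Rr_star)
q_star = retailer_q(w_star, Rr_star)
print(Rr_star, w_star, q_star)
```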

The Supply Chain Finance System's Decisions in Decentralized System
Proposition 2. Given the unit retail price p, the manufacturer's wholesale price w, the unit manufacturing cost c, the retailer's loan coefficient a, the bank's endogenous interest rate R_r and the risk-free interest rate R_f, for IGFR distributions of demand the capital-constrained retailer has a unique optimal order quantity, determined by the first-order condition.
Proof: From Equation (2), take the first-order and second-order derivatives of π_R with respect to q; if the distribution of demand is IGFR, the second derivative is negative at the stationary point, so the maximizer is unique.
Similarly, given the unit retail price p, the unit manufacturing cost c, the retailer's loan coefficient a, the bank's endogenous interest rate R_r, the risk-free interest rate R_f, the credit guarantee coefficient λ and the retailer's optimal order quantity, for IGFR distributions of demand the manufacturer has a unique optimal wholesale price, since the wholesale price lies in a definite closed interval.
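The displayed first-order condition of Proposition 2 did not survive extraction. Under the additional simplifying assumption of full repayment (so the limited-liability and guarantee terms drop out), the retailer's problem reduces to a newsvendor-style condition p(1 − F(q*)) = w + aR_r, sketched below with an exponential demand distribution (which is IGFR, as Assumption 2 requires). All numeric values are illustrative.

```python
import numpy as np

# Simplified retailer problem under full repayment (no default):
#   pi_R(q) = p*E[min(x, q)] - (w - a)*q - a*(1 + Rr)*q
# First-order condition: p*(1 - F(q*)) = w + a*Rr,
# i.e. q* = F^{-1}(1 - (w + a*Rr)/p).
# Demand x ~ Exp(mean mu) is assumed; exponential demand is IGFR.
p, w, a, Rr, mu = 10.0, 7.0, 6.0, 0.06, 100.0

q_star = -mu*np.log((w + a*Rr)/p)     # exponential quantile of the fractile

# Cross-check against brute-force maximisation of the expected profit.
qs = np.linspace(0.1, 600.0, 6000)
E_sales = mu*(1.0 - np.exp(-qs/mu))   # E[min(x, q)] for exponential demand
profit = p*E_sales - (w - a)*qs - a*(1 + Rr)*qs
print(q_star, qs[np.argmax(profit)])  # agree to grid accuracy
```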

The Supply Chain Finance System's Decision in Centralized System
In the numerical example, the unit manufacturing cost is c = 6 and the risk-free interest rate is R_f = 0.03. Given the retailer's loan coefficient a = 6 and a fixed credit guarantee coefficient, Figure 2 describes the change of the retailer's optimal order quantity with the loan interest rate R_r in the decentralized and centralized systems, and Figure 3 describes the change of the SCF participants' and system's optimal profit with the loan interest rate R_r in the decentralized and centralized systems.
(1) From Proposition 5, the retailer's optimal order quantity is related only to p, c, a, R_f in the centralized system. Therefore, the retailer's optimal order quantity and the optimal profit of the supply chain finance system do not change with the loan interest rate R_r.
(2) After the game analysis, the retailer's optimal order quantity decreases with the increase of the loan interest rate R_r in the decentralized system. On the other hand, the bank's optimal profit increases with the loan interest rate R_r, and the growth rate slows down as R_r increases. The retailer's, manufacturer's and SCF system's optimal profits decrease with the increase of the loan interest rate. In order to encourage the retailer and the manufacturer, the bank should choose an appropriate interest rate R_r. Figure 2 shows the retailer's optimal order quantity under different loan interest rates, and Figure 3 the SCF participants' and system's optimal profit under different loan interest rates.
(3) The bank's optimal profit is higher than the retailer's one after the bank's interest rate reaches about 0.07, and the bank's optimal profit is higher than the manufacturer's one after the bank's interest rate reaches about 0.14.
(4) According to Figure 3, the optimal profit of the supply chain financial system in centralized system is always higher than the sum of the three SCF participants' optimal profit in the decentralized system. The higher r R is, the larger optimal profit gap between the decentralized and centralized system is.
Given the bank's endogenous interest rate R_r and the credit guarantee coefficient λ = 0.9, Figure 4 describes the change of the retailer's optimal order quantity with the retailer's loan coefficient a in the decentralized and centralized systems. 1) In the centralized system, the retailer's optimal order quantity and the optimal profit of the supply chain finance system decrease with the increase of the loan coefficient a.
2) In the decentralized system, the retailer's optimal order quantity first decreases with the increase of a and then increases at a faster rate. The bank's optimal profit increases with a. With the increase of a, the retailer's optimal profit first declines at a faster rate, then decreases slowly, and finally decreases sharply. The manufacturer's optimal profit first decreases with a and then increases at a faster rate. The variation tendency of the three SCF participants' total optimal profit is approximately the same as that of the manufacturer.
3) The bank's optimal profit and the retailer's optimal profit are equal at about a = 6.5.
4) From Figure 5, the optimal profit of the supply chain financial system in the centralized system is always higher than the sum of the three SCF participants' optimal profits in the decentralized system, too. When the loan amount is the same as the order cost (aq = wq), that is, when the retailer's own capital is zero, the optimal profit of the SCF system in the centralized system is equal to the sum of the three SCF participants' optimal profits in the decentralized system. Appropriately increasing the retailer's loan coefficient can narrow the optimal profit gap between the decentralized and centralized systems.
Given the bank's endogenous interest rate R_r = 0.06 and the retailer's loan coefficient a = 6, Figure 6 describes the change of the retailer's optimal order quantity with the credit guarantee coefficient λ in the decentralized and centralized systems, and Figure 7 describes the corresponding change of the SCF participants' and system's optimal profit. 1) In the centralized system, the retailer's optimal order quantity and the optimal profit of the supply chain finance system do not change with the credit guarantee coefficient λ. 2) In the decentralized system, the optimal order quantity of the retailer decreases with the increase of λ. The bank's optimal profit increases with λ at a faster rate. The optimal profits of the retailer and the manufacturer decrease with the increase of λ, with approximately the same variation tendency. The sum of the three SCF participants' optimal profits in the decentralized system also decreases with the increase of λ.
3) When λ is below about 0.6, the bank's optimal profit is negative; it is positive only when λ is large enough. 4) From Figure 7, the optimal profit of the supply chain financial system in the centralized system is always higher than the sum of the three SCF participants' optimal profits in the decentralized system. The higher λ is, the larger the optimal profit gap between the decentralized and centralized systems.

Conclusions
This paper studies the decision-making problem of the three-echelon supply chain retailer-manufacturer-bank in centralized and decentralized systems. Through the calculation, it is proved that, under the credit guarantee of the core enterprise, the retailer has an optimal ordering strategy and the core enterprise has an optimal wholesale price. Besides, the optimal profit of the SCF system in the centralized system is always higher than the sum of the three SCF participants' optimal profits in the decentralized system.
The retailer's loan coefficient and the credit guarantee coefficient can narrow the optimal profit gap between the decentralized and centralized systems to a certain extent. This is embodied in two aspects. On the one hand, when the retailer's loan coefficient coincides with the wholesale price, so that the retailer's own capital is zero, the optimal profit of the SCF system in the centralized system is the same as the sum of the three SCF participants' optimal profits in the decentralized system; in this balanced case the retailer's principal and interest equal its total sales, i.e. the retailer's profit is zero. Hence, the retailer's loan coefficient can narrow the optimal profit gap between the decentralized and centralized systems to a certain extent. On the other hand, the smaller λ is, the narrower the optimal profit gap between the decentralized and centralized systems.
Considering that the bank's expected profit is negative when the credit guarantee coefficient is too small, the credit guarantee coefficient can narrow the optimal profit gap between the decentralized and centralized systems only to a certain extent.
There are also shortcomings in this paper. The decisions on the parameters a and λ are not taken into account in the game model; only their influence on the game result is analyzed. Moreover, this paper considers a rather simple supply chain model, whereas in practice supply chain finance faces a more complex supply chain system and involves dynamic evolution.

Introduction
During the past several years, the study of coupled nonlinear evolution equations has played an important role in explaining many interesting phenomena, like electromagnetic wave propagation in impurity media, water waves, pulses in biological chains and so on [1] [2] [3]. At the same time, the coupled integrable dispersionless system (CIDE) has attracted much interest in view of its wide range of applications in various fields of mathematics and physics, including applied mathematics, quantum theory and the theory of conformal maps on the complex plane [4] [5] [6] [7]. The CIDE was first presented by Konno; Equation (1) describes a current-fed string within an external magnetic field [20]. In Equation (1), q, r and s are all functions of x and t, and the subscripts denote partial derivatives with respect to the space-like and time-like variables, respectively.
The aim of this work is to verify whether the congestion due to the displacement of a great number of solitons modifies the conservation properties observed in the case of two solitons. Indeed, we provide the explicit expression of the N-rotating loop soliton solution to the CIDE for a general positive integer N ≥ 2 and, to illustrate our general result, we discuss particular cases of N. The paper is organized as follows. In Section 2, we summarize the transformation of the CIDE Equation (1) into its bilinear form.

Hirota's Bilinearization of the CIDE
Let us consider the setting [20] [21] [22] r = X + iY, s = X − iY, which, inserted into Equation (1), leads to the bilinear equations. It is then possible to obtain, at the required order, the required number of soliton solutions by determining the full expansion of F and Q.

Rotating One- and Two-Loop Soliton Solutions
In this section, we derive the rotating solitons, i.e., solutions for which the Z component of the angular momentum is a conserved quantity. In order to construct the one-rotating soliton solution, we take a one-term expansion. Absorbing the parameter into the phase constant γ_1 gives the one-rotating soliton solution of the CIDE, as depicted in Figure 1.
Next, we choose the two-term solution of Equation (7). According to the above analysis, the two-rotating soliton solution is obtained when we substitute Equations (11)-(13) into Equation (5), as depicted in the corresponding figure, where the real and imaginary parts of the parameters k_n and ω_n are obtained using the dispersion relation. Here, v_n and Ω_n are the phase velocity and the angular velocity of the soliton, which satisfy the stated conditions.
Now, let us consider two simple cases: N = 3 and N = 4.

Summary and Discussion
In this work, we have investigated the CIDE from the viewpoint of Hirota's bilinearization. Investigating its one- and two-soliton solutions, we have proposed a generalization to an explicit N-soliton solution of the same system. As a matter of illustration, we have provided explicit expressions of the 3- and 4-soliton solutions to the CIDE, with figures to support our results. In these figures it appears clearly that the solutions exhibit particle character, since they interact elastically. Since the CIDE has many physical implications, the N-soliton solution we have obtained is helpful in understanding the propagation of waves in media such as optical fibers, since Ref. [22] has provided the relation that links the CIDE and the short pulse system. In this work, we have not gone deeply into the interaction process between solitons; such a study would help to better understand the interaction process that occurs during the propagation of such waves in media including optical fibers.

[3] and [5]. The aim of the paper is now to derive a vibration model for a special kind of rotating machine, where the rotor is linked to the stator by roller bearings, bearing housings and end-shields, and where the stator feet are mounted on a soft foundation, so that the centre of gravity of the stator is displaced by the height h from the foundation (Figure 1).
A soft foundation may be realized e.g. by rubber elements on which the machine is mounted, or by a steel frame foundation; steel frame foundations are often very flexible for economic reasons. Therefore, the model considers not only the rotor, the bearings and the support of the bearings, but also the mass and inertia of the stator at its centre of gravity and the foundation under the machine feet.

Vibration Model
The vibration model is a simplified model, which describes the movement in the yz-plane (Figure 2). The model is generally based on the model in [9], but modified especially for rotating machines with roller bearings instead of sleeve bearings. The model covers a wide range of rotating machines, not only electrical machines; therefore no electromagnetism is considered here, contrary to [9], where electromagnetic field damping is in focus. However, the most important difference from [9] is that in this paper not forced vibrations are analyzed but self-excited vibrations due to instability, caused by the internal (rotating) damping of the rotor shaft.
The vibration system consists of two main masses: the rotor mass m_w, which is concentrated as a lumped mass in the middle between the two bearings, and the stator mass m_s, which is concentrated at the centre of gravity S of the stator with the mass inertia θ_s. Besides these two main masses, two additional masses are considered, the mass of the shaft journal m_v and the mass of the bearing housing m_b, mostly to avoid zeros on the main diagonal of the mass matrix. The rotor has the rotor stiffness c and the internal damping d_i and rotates with the rotary angular frequency Ω. The rotor is connected to the end-shields by bearing housings and roller bearings, which are assumed to be equal on each machine side. Many methods and strategies are described in the literature to derive the stiffness of roller bearings, e.g. [10]- [20]. In this paper, a simplified bearing model is used, where the stiffness of the roller bearings is described by the roller bearing stiffness matrix C_r, with the vertical bearing stiffness c_rz and the horizontal bearing stiffness c_ry. Cross-coupling coefficients of the roller bearings are neglected, as well as damping of the roller bearings. The stiffness and damping of the bearing housings and end-shields are described by the stiffness and damping matrices C_b and D_b, which are also assumed to be equal on each machine side. The stator structure is here assumed to be very stiff compared to the foundation stiffness, so the stator structure can be modeled as rigid. The stator feet, F_L (left side) and F_R (right side), are connected to the ground by the foundation stiffness and damping matrices.
The internal material damping of the rotor d_i can be described by the rotor stiffness c and the mechanical loss factor tan δ_i of the rotor, depending on the whirling angular frequency ω_F, referring to [3]. The same approach is used for the damping coefficients of the bearing housings and end-shields and of the foundation. If the ratio is larger than the chosen value Δ, a loop has to be run through until the ratio is less than Δ.
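The loss-factor relation and the convergence loop described above can be sketched as follows. This is a minimal sketch under stated assumptions: the relation d = c·tan δ / ω_F, the hypothetical solver `solve_whirl_freq`, the tolerance `delta` and all numbers are illustrative, not values or routines from the paper.

```python
def loss_factor_damping(c, tan_delta, omega_F):
    # equivalent viscous damping from the mechanical loss factor:
    # d = c * tan(delta) / omega_F, evaluated at the whirling frequency
    return c * tan_delta / omega_F

def iterate_damping(c, tan_delta, omega_guess, solve_whirl_freq,
                    delta=1e-3, max_iter=50):
    """Fixed-point loop: the damping depends on the whirling frequency,
    which in turn depends on the damping, so iterate until the relative
    change of omega_F falls below the chosen tolerance Delta."""
    omega = omega_guess
    d = loss_factor_damping(c, tan_delta, omega)
    for _ in range(max_iter):
        omega_new = solve_whirl_freq(d)  # hypothetical eigenvalue solver
        d = loss_factor_damping(c, tan_delta, omega_new)
        if abs(omega_new - omega) / omega < delta:
            break
        omega = omega_new
    return d, omega_new
```

In practice `solve_whirl_freq` would recompute the eigenvalue problem of the full system for the current damping coefficients.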

Numerical Example
Based on the mathematical derivation, a numerical example is shown, where the threshold of stability is analyzed.

Boundary Conditions
The rotating machine consists of a rotor, roller bearings, bearing housings, end-shields and a stator (Figure 1), which is mounted on a welded steel frame foundation. The data of the rotating machine, roller bearings and foundation is shown in Table 1.

Analysis of Natural Vibrations and Threshold of Stability
In Figure 5, the real and imaginary parts of the eigenvalues are presented as functions of the rotor speed.
It can be seen that at a rotor speed of about 26130 rpm the real part α_3 becomes zero, and therefore the threshold of stability is reached. The corresponding natural angular frequency is ω_3 = 394.3 rad/s, which is equal to the whirling angular frequency ω_F = ω_stab at the limit of stability of the critical mode, which is here mode 3. Increasing the rotor speed above 26130 rpm leads to instability of the vibration system, as the real part α_3 becomes positive. Therefore, mode 3 is the critical mode shape.
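The stability analysis described above, sweeping the rotor speed and watching the sign of the largest real part of the eigenvalues, can be illustrated on a much simpler system than the paper's model: a Jeffcott rotor with external damping and rotating internal damping. The matrices, parameter values and the bisection search below are illustrative assumptions, not the paper's machine data.

```python
import numpy as np

def max_real_eig(Omega, m, c, d_e, d_i):
    # Jeffcott rotor with external damping d_e and rotating internal
    # damping d_i in the fixed frame; the circulatory terms +/- d_i*Omega
    # couple the two lateral directions and destabilize the forward whirl.
    K = np.array([[c, d_i * Omega], [-d_i * Omega, c]])
    D = (d_e + d_i) * np.eye(2)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-K / m, -D / m]])  # first-order state matrix
    return np.linalg.eigvals(A).real.max()

def stability_threshold(m, c, d_e, d_i, lo=0.0, hi=1e5):
    # bisection on the rotor speed for the zero crossing of max Re(lambda)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if max_real_eig(mid, m, c, d_e, d_i) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy rotor, linear theory gives the threshold Ω_stab = ω_n (1 + d_e/d_i) with ω_n = sqrt(c/m), which the numerical sweep reproduces; the same sign-of-real-part criterion carries over to the full system matrices of the paper.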

Variation of Single Parameters
Now different cases are investigated, and the threshold of stability n_stab is calculated, as well as the natural angular frequency ω_stab at the threshold of stability (Table 2).

Arbitrary Variation of Foundation Stiffness
In this section, the influence of the foundation stiffness on the threshold of stability n_stab and on the whirling angular frequency ω_stab is analyzed.
Therefore, the foundation stiffness is varied from the rated values in Table 1 with factors between 0.2 and 5, i.e. between 0.2 and 5 times the rated values.

Arbitrary Variation of Bearing Stiffness for the Soft Foundation
Now, the influence of the bearing stiffness is analyzed for the rated soft foundation (Table 1). Therefore, the bearing stiffness is varied from the rated values in Table 1 by ±50% (Table 1).

Arbitrary Variation of Bearing Housing and End-Shield Stiffness for the Soft Foundation
In this section, the influence of the bearing housing and end-shield stiffness is analyzed for the rated soft foundation (Table 1). Therefore, the bearing housing and end-shield stiffness is varied from the rated values in Table 1, also by ±50%, which means that it is varied in a range between 3.5×10^8 kg/s² and 1.05×10^9 kg/s².

Arbitrary Variation of Bearing Stiffness for a Rigid Foundation
Here, the influence of the bearing stiffness is analyzed again, but now for a rigid foundation (c_fz = c_fy → ∞). Therefore, the bearing stiffness is again varied in a range between 1.0×10^8 kg/s² and 3.0×10^8 kg/s² (Figure 10).

Arbitrary Variation of Bearing Housing and End-Shield Stiffness for a Rigid Foundation
In this section, the influence of the bearing housing and end-shield stiffness is analyzed again, but for a rigid foundation (c_fz = c_fy → ∞), with the stiffness again varied by ±50% from the rated values.
Figure 10. Influence of the bearing stiffness on (a) the limit of stability n_stab and on (b) the whirling angular frequency ω_stab, for a rigid foundation.

Discussions of the Results
In Sections 4.4-4.8 (Figures 7-11), the influence of the foundation stiffness, the bearing stiffness and the bearing housing and end-shield stiffness on the threshold of stability n_stab and on the whirling angular frequency ω_stab is analyzed. Figure 10 shows that, for a rigid foundation, the threshold of stability can be increased clearly if orthotropic bearing stiffness (c_rz ≠ c_ry) exists. A corresponding, but weaker, effect of the bearing housing and end-shield stiffness can be seen in Figure 11: here the stiffness is also varied in the range of ±50%, but this only leads to a maximum threshold of stability of about 9900 rpm. The reason is that the bearing stiffness and the bearing housing and end-shield stiffness are connected in series, and the rated bearing stiffness is much lower than the rated bearing housing and end-shield stiffness.
Figure 11. Influence of the bearing housing and end-shield stiffness on (a) the limit of stability n_stab and on (b) the whirling angular frequency ω_stab, for a rigid foundation.
The innovation of the paper is that it can be demonstrated (Figure 7) that the threshold of stability can also be increased by a soft foundation, even if the foundation stiffness is isotropic (c_fz = c_fy). The reason is the kind of rotating machine, with a stator mounted with its feet on a soft foundation, so that the centre of gravity of the stator is displaced by the height h from the foundation (Figure 1). This leads to different mode shapes (Figure 6), which cause a similar effect on the threshold of stability as orthotropic bearing stiffness or orthotropic bearing housing and end-shield stiffness for a rigid foundation.
In this example, the threshold of stability could be increased even to a maximum of about 143000 rpm at a certain foundation stiffness (Figure 7). Increasing the foundation stiffness further in the considered range leads to a decrease of the threshold of stability, as can be seen in Figure 7. If the foundation stiffness were increased to infinity, the threshold of stability would drop to 3840 rpm (Table 2, case d). But it has to be considered that a boundary condition of the model is that the stiffness of the stator structure is much higher than the foundation stiffness, so that the stator structure can be assumed to be rigid; as a rough estimation, this assumption is only valid up to a certain foundation stiffness. Most of the calculated thresholds of stability are far above the speed limit of the roller bearings and far above what the rotor structure would withstand. Additionally, it has to be noticed that with increasing rotor speed, higher bending modes of the rotor, and therefore also the gyroscopic effect, become more and more important. But of course, this analysis helps to estimate whether an instability would occur within the rotor speed limits of the roller bearings and of the rotor structure, if higher bending modes of the rotor and gyroscopic effects can be neglected.

Conclusion
The paper presents a mathematical model for analyzing the threshold of stability of a special kind of rotating machine, consisting of a rotor, stator, end-shields, bearing housings and roller bearings, mounted on a soft foundation, so that the centre of gravity of the stator is displaced by the height h from the foundation (Figure 1). After the mathematical coherences of the model were described, a procedure for deriving the threshold of stability was presented.
Additionally, a numerical example was shown, where the threshold of stability was calculated for different boundary conditions. The influence of the stiffness of the foundation, of the bearings and of the bearing housings and end-shields was demonstrated, as well as the influence of the damping of the foundation and of the bearing housings and end-shields on the threshold of stability. The main contribution and innovation of the paper is to demonstrate that for this kind of rotating machine, the stiffness of a soft foundation, even if isotropic, can help stabilize the vibration system, leading to a similar effect as orthotropic bearing stiffness or orthotropic bearing housing and end-shield stiffness for a rigid foundation. Of course, the presented model is a simplified model of the system, but the conclusions and the procedure for deriving the threshold of stability can also be applied in a finite element analysis. As future work, an experimental validation of the presented theory may be carried out, based e.g. on a small induction motor, to demonstrate the stabilizing influence of the foundation.
lisions. Hadron-hadron (hh) observables like the charged particle multiplicity n_ch and the pseudorapidity density dn_ch/dη are essential keys to characterize the properties of the matter created in proton-proton (pp) collisions [1] [2], where η = −ln[tan(θ/2)] is the pseudorapidity and θ is the polar angle with respect to the beam axis. The dependence of these observables on the collision energy (centre-of-mass energy √s) and on the collision geometry is a key tool to understand the underlying particle production mechanism [3] [4] [5]. The investigation of these observables has been used to improve, or reject, models of particle production, which are often available as Monte Carlo event generators [6] [7] [8] [9]. The charged particle multiplicity is the simplest observable for understanding multi-particle production in collisions of hadrons at high energy [6] [7] [8] [9].
The charged particle multiplicity distributions P(n_ch) provide an indispensable tool in the investigation of the dynamics of multi-particle production processes. Their measurement forms an important part of the experimental activity on hh collisions. Some new experimental information on multi-particle production has been reported in the recent past [6]. Consequently, a lot of effort has also been put into analyzing and/or organizing the experimental data by using various theoretical as well as phenomenological schemes [5]-[12].
Several models (empirical and deterministic) attempt to describe the multiplicity distributions [5] [6] [7]. The first step towards a successful understanding of the multiplicity distributions was taken by Feynman in 1969 [3]. Moving to higher energies, deviations from those first models were observed, and the P(n_ch) data were described using a single Negative Binomial Distribution (NBD) [4], which successfully describes P(n_ch) in full phase space up to √s = 540 GeV, as well as in different η-intervals [4]. However, P(n_ch) deviates from the NBD for large η-intervals at √s = 900 GeV; Giovannini and Ugoccioni [1] [2] describe the measured P(n_ch) for √s = 900 GeV by a combination of two weighted NBDs [4]. Nevertheless, this issue is still an open question of interest from the point of view of theoretical and experimental physicists.
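The single and two-component NBD forms mentioned above can be written down directly. This is a sketch of the standard parameterization (mean nbar, shape parameter k); the parameter values in any usage are illustrative, not fitted to data.

```python
import math

def nbd(n, nbar, k):
    # Negative Binomial Distribution
    # P(n) = Gamma(n+k)/(Gamma(k) n!) * (nbar/(nbar+k))**n * (k/(nbar+k))**k
    # nbar is the mean multiplicity; 1/k controls the width of the distribution.
    # Computed in log space (lgamma) to avoid overflow at large n.
    return math.exp(math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
                    + n * math.log(nbar / (nbar + k))
                    + k * math.log(k / (nbar + k)))

def two_nbd(n, alpha, nbar1, k1, nbar2, k2):
    # weighted combination of two NBDs, as used for the sqrt(s) = 900 GeV data
    return alpha * nbd(n, nbar1, k1) + (1.0 - alpha) * nbd(n, nbar2, k2)
```

Both forms are normalized probability distributions over n = 0, 1, 2, …, and the single-NBD mean is nbar by construction.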
To alleviate this problem, we have developed a black-box modeling methodology based on applying the artificial neural network (ANN) approach [13].
ANN Black-box models are powerful and promising tools for complex system modeling. Utilization of an ANN model is, in general, highly suitable for simulating the nonlinear behavior of charged particles multiplicity distributions in proton-proton interactions. This is due to the formulations of neural network models being based on nonlinear functions and having a flexible mathematical structure. In recent years, there has been an increasing amount of applications of ANN models in the field of high energy physics (HEP) [14]- [19].
The most commonly used neural network model is the back-propagation (BP) network [13], which is a multi-layer feed-forward network trained according to the error back-propagation algorithm. A BP network can be used to learn and discover a mathematical mapping between input and output [13]. The disadvantage of multi-layer feed-forward networks using error back-propagation is that the best number of hidden layers and units varies from task to task and so must be determined manually through trial and error.
One approach to automatically determine a good size for a network is to start with a minimal network and then add hidden units and connections as required, as in Cascade Correlation Neural Networks (CCNN) [20]-[25]. CCNNs have several advantages over conventional ANNs: they are self-organized (i.e. built automatically), have lower computational cost and complexity (requiring few adjustable parameters), and their training is very fast.
The objective of this paper is to develop a mathematical model based on the CCNN approach to calculate and predict the charged particle multiplicity distributions.

CCNN Black-Box Model for P(n_ch) and ⟨n_ch⟩
Developing a mathematical model that can accurately describe the physical behavior of a complex physical problem is a challenging task. Meanwhile, neural networks are a very promising tool for empirical "black-box" modeling of complex systems without going into mathematical details. An artificial neural network (ANN) is a nonlinear empirical model inspired by biological neural networks [13]. ANN black-box models do not need detailed prior knowledge of the structure and the different interactions that exist between the important variables of the nonlinear system under investigation. Therefore, the ANN is a powerful and promising tool for complex system modeling. An ANN can be trained with the Cascade-Correlation (CC) learning method to "learn" complex dynamic behaviors of physical systems. A CCNN acts as a black box and learns to predict the value of specific output variables given sufficient input information. The cascade correlation neural network is capable of global function approximation, i.e. it represents a function over a whole data set [20]-[25].
In this paper, we explore the use of the CCNN for developing mathematical black-box models from experimental data on pp collisions. In the following subsection, we give a brief introduction to the CCNN approach.

Overview on CCNN Approach
Artificial neural networks (ANNs) are classified as intelligent computing systems because of their ability to learn. ANNs were inspired by the human brain. They consist of artificial neurons connected with each other, termed nodes. Each neuron has a group of inputs, outputs and a transfer function. The mathematical model of a neuron can be described by the equation y = f(Σ_k w_k x_k), where y is the output value, x_k is the k-th input, w_k is the weight of the connection related to the k-th input, and f is the transfer function, which is usually a radial basis function or the sigmoid function [24] [25]. It is known that a feed-forward neural network (FFANN) with one hidden layer is a universal function approximator, so it can approximate any nonlinear function with arbitrary precision. Furthermore, any FFANN can be trained (in a supervised way) by the BP algorithm, which calculates the gradient of the network error with respect to the synaptic weights [13].
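The neuron equation above, y = f(Σ_k w_k x_k), takes only a few lines in a minimal sketch; the choice of the sigmoid as the default transfer function is one of the common options named in the text, and all input values are illustrative.

```python
import math

def sigmoid(z):
    # a common choice of transfer function f
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, f=sigmoid):
    # y = f(sum_k w_k * x_k): weighted sum of inputs passed through f
    return f(sum(w * x for w, x in zip(weights, inputs)))
```

With all weights zero the weighted sum is zero, so the sigmoid neuron outputs f(0) = 0.5 regardless of the inputs.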
The main problem in ANN design is choosing the appropriate number of hidden layers and units to learn a given concept. If a network has too few hidden units, it will not have the computational power to learn the concept well. Given too many hidden units, it will over-fit the training dataset and generalize poorly to new examples not included in the training data. The CC approach, which constructs the neural network from the bottom up, was proposed by Fahlman and Lebiere (1990) [24] in order to solve the problems of low convergence speed of traditional BP, the local minima problem, the step-size problem and the moving target problem, and to avoid having to define the number of hidden nodes in advance.
The cascade-correlation architecture supports a variety of learning algorithms. One of the most robust back-propagation variants, called "Quickprop", was published by Fahlman (1988) [25].
The learning algorithm begins with a minimal network (input and output units, without hidden units). The output layer weights are adjusted by the gradient descent algorithm and the error of the network is measured; if the network's performance is not satisfactory, a candidate unit is generated and trained.
This candidate neuron is trained by maximizing the magnitude of the correlation between the candidate's output and the error term to be minimized: gradient descent is used to minimize the network's output error, while gradient ascent is employed to maximize the correlation C between the candidate's output and the residual network error. Once a neuron is finally added to the network (activated), its input connections become frozen and do not change anymore. The network (input, output and hidden units) is then trained until the residual error of the network is minimized. This process of optimizing the output weights, creating a hidden neuron, optimizing the hidden neuron weights, connecting it to the output neurons, and adjusting the output neuron weights is repeated until an acceptably small error is produced or a maximum number of nodes is reached. The following lines summarize the main steps of the CCNN algorithm.
The Cascade Correlation algorithm cycles through two phases: an output phase, in which the weights entering the output units are trained in order to reduce the network error, and an input phase, in which the weights entering candidate recruits are trained in order to correlate with the network error [23] [24] [25]. The connection weights are adjusted in the two phases to maximize the correlation and minimize the network error.
In the output phase:
• Initialize the CCNN network (2 layers).
• Calculate the actual output and train the output weights to minimize the network error.
In the input phase:
• Add a candidate unit and calculate its output.
• Calculate the correlation between the candidate unit's output and the residual error of the network.
• Train the candidate to maximize C by gradient ascent (using the Quickprop method).
Here, S is the derivative of the function being optimized (E, which is minimized in the output phase; C, which is maximized in the input phase), and the weight change is computed by the Quickprop update Δw(t) = [S(t)/(S(t−1) − S(t))] Δw(t−1). Then the output phase is started again to train the main network output.
These two phases are repeated until either the training pattern has been learned to a predefined level of acceptance or a preset maximum number of hidden units has been added, whichever occurs first. For more details see [23] [24] [25].
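The input-phase score that the candidate maximizes, the magnitude of the covariance between the candidate's output V and the residual output errors E, can be sketched as follows; the array shapes and toy values are assumptions for illustration.

```python
import numpy as np

def candidate_correlation(V, E):
    """Fahlman-style candidate score
    C = sum_o | sum_p (V_p - Vbar) * (E_po - Ebar_o) |
    over P training patterns and O output units; the input phase drives
    the candidate's incoming weights to maximize this quantity."""
    Vc = V - V.mean()        # centered candidate outputs, shape (P,)
    Ec = E - E.mean(axis=0)  # centered residual errors, shape (P, O)
    return float(np.abs(Vc @ Ec).sum())
```

Because the score takes the absolute value per output, a candidate that is strongly anti-correlated with the residual error scores just as well as a positively correlated one; the output-phase weights absorb the sign.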

Results and Discussion
In this section, we apply the CCNN model to calculate and predict P(n_ch) and ⟨n_ch⟩ using the available experimental data [26]-[34].
With the present CCNN, we have obtained R² = 0.998 and RMSE = 0.000137. Figure 3 shows the energy dependence of the average charged multiplicity in pp collisions for √s ranging from 30 GeV to 14 TeV; the calculated values are compared with the corresponding experimental and theoretical results. From Figure 3 we also notice that ⟨n_ch⟩ increases with increasing √s, showing the same trend as the experiment [26]-[34]. The results of the present work open the route to applying modern soft-computing procedures such as neural networks to the modeling of HEP.
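The goodness-of-fit measures quoted above, R² and RMSE, are computed in the usual way; the sketch below uses toy arrays, not the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error between targets and network predictions
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction gives RMSE = 0 and R² = 1; values such as R² = 0.998 therefore indicate that the network reproduces the training targets almost exactly.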

Conclusions
The charged-particle multiplicity is among the simplest observables that provide important insights into the mechanisms of particle production. In the present work, we have used a CCNN for modeling the multiplicity distri-