Reasoning about Context Information in Cloud Computing Environments

The notion of context provides flexibility and adaptation to cloud computing services. Location, time, identity and activity of users are examples of primary context types. The motivation of this paper is to formalize reasoning about context information in cloud computing environments. To formalize such context-aware reasoning, the logic LCM of context-mixture is introduced based on a Gentzen-type sequent calculus for an extended resource-sensitive logic. LCM has a specific inference rule called the context-mixture rule, which can naturally represent a mechanism for merging formulas with context information. Moreover, LCM has a specific modal operator called the sequence modal operator, which can suitably represent context information. The cut-elimination and embedding theorems for LCM are proved, and a fragment of LCM is shown to be decidable. These theoretical results are intended to provide a logical justification of context-aware cloud computing service models such as a flowable service model.


Introduction

Contexts in Cloud Computing Environments
The motivation of this paper is to formalize reasoning about context information in cloud computing environments. To formalize such context-aware reasoning, the logic LCM of context-mixture is introduced as a Gentzen-type sequent calculus based on linear logic [1,2], which is known to be a useful resource-sensitive logic. LCM has a specific inference rule called the context-mixture rule and a specific modal operator called the sequence modal operator [3,4]. The cut-elimination and embedding theorems for LCM are proved as the main results of this paper. A fragment of LCM is also shown to be decidable. These theoretical results are intended to provide a concrete logical justification of context-aware cloud computing service models such as a flowable service model [5,6].
The definitions of cloud computing, which include the properties on-demand, pay-by-use, virtualized and dynamically scalable, indicate the characteristics of cloud computing environments [7,8]. Cloud-related issues have been discussed and studied based on the notion of contexts, which include the location, time, identity and activity of users. The use of context is known to be very important in cloud and ubiquitous computing.
There is a widely accepted definition of context [9]: Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between the user and application, including the user and application themselves.
Location, time, identity and activity are primary context types for characterizing the situation of a particular entity. Contexts can be classified into three categories [6]: nature context, human context and culture context. In the present paper, nature context is especially considered. Nature context includes when (time context) and where (location or space context) information.

Flowable Services
Context provides flexibility and adaptation to services. A flowable service, which is a new notion of context-aware cloud computing services, is a logical stream that organizes and provides circumjacent services in such a way that they are perceived by individuals as services naturally embedded in their surrounding environments [5,6,10-14]. A flow of service is a metaphor for a subconsciously controlled navigation that guides the user through the fulfillment of a flowable service process that fits the user's context and situation and runs smoothly, with unbroken continuity, in an unobtrusive and supportive way. Flowable services can be useful for context-aware cloud computing applications such as Cloud Campus, the e-learning environment of Cyber University in Japan.
The original intention of the flowable service model is to apply resources in open cloud environments [5,6,10-14]. The model uses intensifying context information to adjust the service flow to be more usable, and it shares resources and services fairly and to the utmost extent. To formalize reasoning about the context-aware flowable service model in open cloud computing environments, we need an appropriate logic that can represent the following three items: 1) the context-mixture rule; 2) resource-sensitive reasoning; 3) context information.

Context-Mixture Rule
In this paper, the logic LCM of context-mixture, which can represent the above three items (context-mixture rule, resource-sensitive reasoning and context information), is introduced as a Gentzen-type sequent calculus based on linear logic. LCM has a specific inference rule called the context-mixture rule, which can naturally represent a mechanism for merging formulas with context information. Merging formulas with context information, which represents an interaction between different pieces of context information, is required for a suitable representation of context-aware flowable services, since handling various kinds of context information is an important issue for flowable services. We call such a merging mechanism context-mixture.
The notion of context-mixture is also important for representing deployment models in cloud computing environments [8]. Deployment models are classified as private cloud, community cloud, public cloud and hybrid cloud. The cloud infrastructure of hybrid cloud is a "mixture" (or composition) of two or more distinct cloud infrastructures (private, community or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
The context-mixture rule of LCM is of the form

\[
\frac{\Gamma_1, \Delta \Rightarrow \gamma \qquad \Gamma_2, \Delta \Rightarrow \gamma}{\Gamma_1, \Gamma_2, \Delta \Rightarrow \gamma}\ (\text{mixture})
\]

where the multisets Γ1 and Γ2 of formulas with context information are mixed by this rule, and Δ is a possibly empty multiset of formulas.
The rule (mixture) was introduced in [15], and was called the mingle rule there. The name "mingle" comes from the original version [16] of the mingle rule, which may be displayed (in single-succedent form) as

\[
\frac{\Gamma \Rightarrow \gamma \qquad \Delta \Rightarrow \gamma}{\Gamma, \Delta \Rightarrow \gamma}\ (\text{mingle})
\]

This original rule and the corresponding Hilbert-style axiom scheme have been studied in formalizing "relevant" human reasoning [16,17], grammatical reasoning [18] and reasoning about communication-merge in process algebras [15].
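For reference, the Hilbert-style axiom scheme mentioned above can be displayed as follows; this is the mingle axiom as standardly formulated in the relevance-logic literature (e.g., for the system RM), shown here as a reference display rather than reproduced from [16,17]:

```latex
% The mingle axiom scheme: any formula may be "mingled" with itself.
\[
  \alpha \rightarrow (\alpha \rightarrow \alpha)
\]
```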
As presented in [15], the context-mixture rule has been used for representing communication-merge in process algebras. This is also justified for the present study, since there is the slogan "context-as-process" presented in [19]: We must consider "context as a part of process of interacting with an ever-changing environment that is composed of reconfigurable, migratory, distributed and multiscale resources" because it is "not simply the state of a predefined environment with a fixed set of interaction resources".

Resource-Sensitive Reasoning
The logic LCM of context-mixture is obtained from linear logic [1,2] by adding the context-mixture rule (mixture) and a sequence modal operator, which represents a sequence of symbols. By the sequence modal operator in LCM, we can appropriately express "context information" in "resource-sensitive reasoning".
The notion of "resources", encompassing concepts such as processor time, memory, cost of components and energy requirements, is fundamental to computational systems. This notion is also very important for handling efficient resource management in cloud computing environments [20]. Examples of resources in cloud computing environments include storage, processing, memory and network bandwidth [8].
Linear logic can elegantly represent the notion of "resources" [1]. In linear logic, the concept of "resource consumption" can be represented by using the linear implication connective ⊸ and the fusion connective ⊗, and the concept of "reusable resource" can be represented by using the linear exponential operator !. A typical example formula is:

\[
\mathrm{coin} \otimes \mathrm{coin} \multimap \mathrm{coffee} \otimes {!}\,\mathrm{water}
\]

This example means "if we spend two coins, then we can have a cup of coffee and as much water as we like" when the price of coffee is two coins and water is free. It is to be noted that this example cannot be expressed in classical logic, since the formula coin ∧ coin (two coins) in classical logic is logically equivalent to coin (one coin), i.e., classical logic has no resource-awareness.
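The resource-consumption reading above can be illustrated with a small multiset sketch. The helper below is purely illustrative (the function name and token encoding are our own assumptions, not part of LCM): it applies a linear implication by consuming its cost exactly once, so two coins really are required.

```python
from collections import Counter

def apply_implication(resources, cost, product):
    """Apply a linear implication cost -o product to a multiset of resources.

    Returns the updated multiset, or None when the resources are
    insufficient: in linear logic, resources cannot be duplicated.
    """
    res, need = Counter(resources), Counter(cost)
    if any(res[r] < n for r, n in need.items()):
        return None
    res.subtract(need)   # the cost is consumed exactly once
    res.update(product)  # the product becomes available
    return +res          # unary + drops zero-count entries

# coin (x) coin -o coffee: two coins buy one coffee ...
two_coins = apply_implication(["coin", "coin"], ["coin", "coin"], ["coffee"])
# ... but a single coin does not suffice, unlike in classical logic,
# where coin AND coin is logically equivalent to coin.
one_coin = apply_implication(["coin"], ["coin", "coin"], ["coffee"])
```

A reusable resource such as !water would instead be modeled as a token that is never decremented.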

Context Information
In order to discuss certain real and practical examples, the resource descriptions in linear logic should be more fine-grained and expressive, and capable of conveying context information. For example, the following expressions may be necessary for some practical situations:

\[
[\mathrm{teashop}\,;\,\mathrm{John}](\mathrm{coin} \otimes \mathrm{coin} \otimes \mathrm{coin} \multimap [\mathrm{1min}\,;\,\mathrm{1min}]\mathrm{coffee} \otimes [\mathrm{1min}]\mathrm{water})
\]
\[
[\mathrm{cafeteria}\,;\,\mathrm{John}](\mathrm{coin} \otimes \mathrm{coin} \multimap [\mathrm{1min}]\mathrm{coffee})
\]

These examples respectively mean "in a teashop, if John spends three coins, then he can have a cup of coffee after two minutes and a cup of water after one minute," and "in a cafeteria, if John spends two coins, then he can have a cup of coffee after one minute." In these examples, the expressions [teashop ; John], [cafeteria ; John], [1min ; 1min] and [1min], which are regarded as "context information", can naturally be represented by the sequence modal operator in LCM.
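As a data-level illustration of how such context prefixes might be carried around, the following toy encoding (our own assumption, not LCM's formal semantics) tags a formula with an ordered tuple of context tokens, and collapses the empty prefix, reflecting the fact that a formula prefixed by the empty sequence coincides with the formula itself.

```python
# Toy encoding of sequence-modal prefixes as ordered tuples of context
# tokens. The token names follow the teashop/cafeteria examples above;
# the encoding itself is only an illustrative assumption.

def prefix(context, formula):
    """Represent [c1 ; ... ; cn]formula as ((c1, ..., cn), formula)."""
    return (tuple(context), formula)

def collapse_empty(tagged):
    """A formula prefixed by the empty sequence coincides with the formula."""
    ctx, formula = tagged
    return formula if ctx == () else tagged

# "in a teashop, John gets coffee after two minutes for three coins"
teashop_offer = prefix(
    ["teashop", "John"],
    ("limp", ("coin", "coin", "coin"), prefix(["1min", "1min"], "coffee")),
)
```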
As presented in [4], expressions of the following kind are available in a subsystem of LCM, which respectively mean: "if a client sends an incorrect user ID and a correct password to log in to a server at the t-th login attempt, then the server returns an error message to the client," and "if a server returns the error message more than twice to a client, then the server returns the password-reject message to the client." Note that the error messages are expressed as a "resource" by using the connectives ⊗ and !, and the "information" on servers, clients and login attempts is expressed by the sequence modal operator.

Informational Interpretation
The reason underlying the use of the notion of "sequences" in the sequence modal operator is explained below. The notion of "sequences" is fundamental to practical reasoning in computer science because it can appropriately represent "data sequences", "program-execution sequences", "action sequences", "time sequences" etc. The notion of sequences is thus useful to represent the notions of "information", "attributes", "trees", "orders", "preferences" and "ontologies". Representing "context information" by sequences is especially suitable because a sequence structure gives a monoid ⟨M, ;, ∅⟩ with the following informational interpretation [21]: 1) M is a set of pieces of (ordered or prioritized) information (i.e., a set of sequences); 2) ; is a binary operator (on M) that combines two pieces of information (i.e., a concatenation operator on sequences); 3) ∅ is the empty piece of information (i.e., the empty sequence).
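The monoid reading can be checked mechanically on a toy representation, with tuples modeling sequences, + modeling the ; operator and () modeling the empty sequence (an illustrative sketch, not part of the paper's formal apparatus):

```python
# Sequences as tuples: concatenation is associative and the empty tuple
# is a two-sided identity, so <M, ;, emptyseq> is indeed a monoid.

def combine(x, y):
    """The ';' operator: combine two pieces of ordered information."""
    return x + y

empty = ()  # the empty piece of information

a, b, c = ("teashop",), ("John",), ("1min", "1min")

associative = combine(combine(a, b), c) == combine(a, combine(b, c))
identity = combine(empty, a) == a == combine(a, empty)
```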
Based upon the informational interpretation, a formula of the form [b1 ; b2]α intuitively means that "α is true based on the sequence b1 ; b2 of (ordered or prioritized) information pieces." Further, a formula of the form [∅]α, which coincides with α, intuitively means that "α is true without any information (i.e., it is an eternal truth in the sense of classical logic)."

The Logic LCM of Context-Mixture
Prior to the precise discussion, the language of the proposed logic is introduced below. Formulas are constructed from propositional variables, 1 (multiplicative truth constant), ⊤ (additive truth constant), 0 (additive falsity constant), ⊸ (implication), ∧ (conjunction), ∨ (disjunction), ⊗ (fusion), ! (exponential) and [b] (sequence modal operator), where the sequence b is nonempty. It is assumed that the terminological conventions regarding sequents (e.g., antecedent and succedent) are the usual ones. If a sequent S is provable in a sequent calculus L, then such a fact is denoted as L ⊢ S. Parentheses are omitted according to the usual conventions for any formulas α, β and γ. A rule R of inference is said to be admissible in a sequent calculus L if the following condition is satisfied: for any instance of R, if all of the premises of the instance are provable in L, then so is its conclusion. The set of sequences (including the empty sequence) is denoted as SE. LCM consists of the initial sequents, the cut rule, the context-mixture rule (mixture) displayed in the introduction, the sequence rules for the sequence modal operator, and the logical inference rules for the constants and connectives listed above. It is remarked that Girard's intuitionistic linear logic ILL is a subsystem of LCM: it is obtained from LCM by deleting (mixture) and the sequence modal operators.
The sequents of the form [d̂]α ⇒ [d̂]α, for any formula α, where d̂ denotes a (possibly empty) list of sequence modal operators, are provable in cut-free LCM. This fact is shown by induction on α.
The (possibly empty) multiset expression Δ in (mixture) is needed to show the cut-elimination theorem for an extended linear logic with (mixture) [15].
Proposition 2.3. The following rules are admissible in cut-free LCM.
Proof. Straightforward. Here, we show the claim only for the rule ([b]regu), by induction on the proofs P of the premise sequent in cut-free LCM. We distinguish the cases according to the last inference of P, and show only the following cases.

Case (∧left): The last inference of P is an application of (∧left). By the induction hypothesis, we obtain a cut-free proof of the corresponding premise, from which the required fact follows by the same rule; the other cases are similar.

Proposition 2.4. The following sequents are provable in cut-free LCM for any formulas α and β.

The logic LM is equivalent to a logic introduced in [15]. Indeed, LM is equivalent to the ⊥-free fragment of MILLm [15], where ⊥ is the multiplicative falsity constant. As shown in [15], the cut-elimination theorem holds for LM. This fact will be used to show the cut-elimination theorem for LCM. The fact that the !-free fragment LM−! of LM is decidable will also be used to show the decidability of the !-free fragment LCM−! of LCM.

Some Results on LCM
Definition 3.2. Let Φ be the set of propositional variables. A mapping f from the set of formulas of LCM to the set of formulas of LM is defined by clauses including: 1) for any p ∈ Φ and any (possibly empty) list d̂ of sequence modal operators, f([d̂]p) := p_d̂, where p_d̂ is a fresh propositional variable indexed by d̂; and f commutes with the constants and connectives of LM. Let Γ be a set of formulas. Then, an expression f(Γ) means the result of replacing every occurrence of a formula α in Γ by an occurrence of f(α).

Theorem 3.3. (Embedding) Let Γ be a multiset of formulas, γ be a formula, and f be the mapping defined in Definition 3.2. Then: LCM ⊢ Γ ⇒ γ if and only if LM ⊢ f(Γ) ⇒ f(γ).

Proof. (⟹): By induction on the proofs P of Γ ⇒ γ in LCM. We distinguish the cases according to the last inference of P. We show some cases.
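Before examining the cases, the mapping f can be sketched concretely. Under the assumption, standard for embeddings of this kind, that f collapses a stack of sequence-modal prefixes on a propositional variable into a fresh indexed variable and commutes with the other connectives, a sketch is as follows (the AST encoding and all names are our own illustrative choices, not the paper's):

```python
# Hedged sketch of an embedding mapping in the style of Definition 3.2.
# Formulas are strings (propositional variables) or tuples (op, args...);
# ("seq", d, body) stands for [d]body.

def f(formula, prefix=()):
    if isinstance(formula, str):  # a prefixed variable becomes a fresh one
        return formula if not prefix else formula + "_" + ";".join(prefix)
    op, *args = formula
    if op == "seq":               # accumulate the sequence-modal prefix
        d, body = args
        return f(body, prefix + (d,))
    return (op, *(f(arg, prefix) for arg in args))  # commute with connectives

# f([teashop][John]p) is the indexed propositional variable p_teashop;John
example = f(("seq", "teashop", ("seq", "John", "p")))
```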
Case (mixture): The last inference of P is an application of (mixture). By the induction hypothesis, the translated premises are provable in LM; then, we obtain the required fact by applying (mixture) in LM. Case ([b]): The last inference of P is a sequence rule; by the induction hypothesis and the definition of f, we obtain the required fact.
Case (; left): The last inference of P is an application of (;left). By the induction hypothesis, the translated premise is provable in LM. Then, we obtain the required fact, since the translations of the premise and the conclusion coincide by the definition of f.

(⟸): By induction on the proofs Q of f(Γ) ⇒ f(γ) in LM. We distinguish the cases according to the last inference of Q. We show some cases.
The logical-rule cases are obtained from the induction hypothesis by the definition of f. Case (cut): The last inference of Q is an application of (cut); by the induction hypothesis, we obtain LCM-proofs of the premises, and the required fact follows by (cut) in LCM.

Theorem 3.4. (Cut-elimination) The rule (cut) is admissible in cut-free LCM.

Proof. We have the following modified statements of Theorem 3.3: 1) if LCM ⊢ Γ ⇒ γ, then f(Γ) ⇒ f(γ) is provable in cut-free LM; 2) if f(Γ) ⇒ f(γ) is provable in cut-free LM, then Γ ⇒ γ is provable in cut-free LCM. To show the second statement, we do not need to prove the case for (cut) as in Theorem 3.3. We now prove the cut-elimination theorem for LCM as follows. Suppose LCM ⊢ Γ ⇒ γ. Then, f(Γ) ⇒ f(γ) is provable in cut-free LM by the modified statement 1) of Theorem 3.3, and hence Γ ⇒ γ is provable in cut-free LCM by the modified statement 2). ■

Corollary 3.5. (Consistency) LCM is consistent, i.e., the empty sequent is not provable in cut-free LCM.


In the following, we show that the !-free fragment LCM−! of LCM is decidable. Before showing the decidability of LCM−!, we mention that LCM itself is undecidable. ILL is known to be undecidable, and the proof of the undecidability of ILL is carried out by encoding Minsky machines. LCM can encode Minsky machines in the same way as ILL, since LCM is an extension of ILL.

Conclusions
In this paper, the logic LCM of context-mixture, which can suitably express context information in cloud computing environments, was introduced. The cut-elimination and embedding theorems for LCM were proved, and the !-free fragment of LCM was shown to be decidable. LCM is based on an extended resource-sensitive (intuitionistic linear) logic with both the context-mixture rule (mixture) and the sequence modal operator [b]. The rule (mixture) of LCM can suitably represent a mechanism for merging formulas with context information, and the operator [b] of LCM can represent context information. A concrete logical foundation of reasoning about context information in cloud computing environments was thus obtained in this paper. Some technical remarks on LCM and some related works on context-aware modeling are addressed in the rest of this paper.
It is remarked that the sequence modal operator in LCM can be adapted to a wide range of non-classical logics. An extended intuitionistic linear logic with the sequence modal operator but without the context-mixture rule was shown to be useful for describing secure password authentication protocols [4]. An extended full computation-tree logic with the sequence modal operator was shown to be applicable to certain ontological descriptions [3]. An extended linear-time temporal logic with the sequence modal operator was shown to be useful for specifying some time-dependent secure authentication systems [22,23]. The sequence modal operator may be applicable to other useful non-classical logics, e.g., some extended linear logics [24,25].
The present paper was intended to provide a logical justification of context-aware cloud computing service models (such as a flowable service model) in cloud computing environments. We now give a survey of such context-aware model approaches. Context is used to address various issues in cloud and ubiquitous environments. Many context models have been proposed and developed: a key-value model, a markup model, an object-oriented model and an ontology-based model (see [26] for a survey). Since location is one of the most typical kinds of context information, the location context involves special models: geometric models, symbolic models and hybrid models. Wohltorf et al. [27] introduced a context-awareness module which combines three sub-modules: the location-based service module, the personalization module, and the device and network independence module. This work introduced an agent-based serviceware framework to assist service providers in developing innovative services. Gu et al. [28] proposed an ontology-based context model based on OWL inside the SOCAM (Service-Oriented Context-Aware Middleware) architecture. Coutaz et al. [19] proposed a conceptual framework for context-aware computing, including ontological and architectural foundations. In this framework, the context is modeled as a directed state graph, where the nodes denote contexts and the edges denote the conditions for changes in contexts. Macedo et al. [29] developed a distributed information repository for automatic context-aware MANETs in order to adapt multimedia context-rich applications to service computing. Feug et al. [30] conceived a shared situation-awareness model that supplies each user with agents for focused and customized decision support according to the user's context.
Definition 2.1. Formulas and sequences are defined by the following grammar, assuming p and e represent propositional variables and atomic sequences, respectively:

\[
\alpha ::= p \mid 1 \mid \top \mid 0 \mid \alpha \multimap \alpha \mid \alpha \wedge \alpha \mid \alpha \vee \alpha \mid \alpha \otimes \alpha \mid {!}\alpha \mid [b]\alpha
\qquad
b ::= e \mid \emptyset \mid b\,;\,b
\]

Sequences are constructed from atomic sequences, ∅ (empty sequence) and ; (composition). Lower-case letters b, c, ... are used for sequences, lower-case letters p, q, ... are used for propositional variables, Greek lower-case letters α, β, ... are used for formulas, and Greek capital letters Γ, Δ, ... are used for finite (possibly empty) multisets of formulas. For any ♯ ∈ {!, [b]}, an expression ♯Γ is used to denote the multiset {♯γ | γ ∈ Γ}. The symbol ≡ is used to denote the equality of sequences (or multisets) of symbols. A sequent is an expression of the form Γ ⇒ γ.

Definition 3.6. LCM−! is obtained from LCM by deleting {(!left), (!right), (!co), (!we)}, i.e., LCM−! is the !-free fragment of LCM. Definition 3.7. LM−! is obtained from LM in the same way, i.e., LM−! is the !-free fragment of LM.

Proof (of the decidability of LCM−!). By the modified statement 2) of Theorem 3.3 and the cut-elimination theorem for LM, the provability of LCM−! can be transformed into that of LM−! by the restriction of Theorem 3.3 to the !-free fragments. Since LM−! is decidable, LCM−! is also decidable. ■