
Traditional collaborative filtering (CF) does not take into account contextual factors such as time, place, companion and environment, which carry useful information about users and the recommender application. Recent context-aware CF therefore takes advantage of such information in order to improve the quality of recommendation. There are three main context-aware approaches: contextual pre-filtering, contextual post-filtering and contextual modeling. Each approach has individual strong points and drawbacks, but all of them require a steady and fast inference model to support the context-aware recommendation process. This paper proposes a new approach which discovers a multivariate logistic regression model by mining both traditional rating data and contextual data. The logistic model is an optimal inference model for the binary question “whether or not a user prefers a list of recommendations with regard to a contextual condition”. Consequently, such a regression model is used as a filter to remove irrelevant items from recommendations. The final list is the best set of recommendations to be given to users under contextual information. Moreover, the search space of the logistic model is reduced to a smaller set of items, the so-called general user pattern (GUP). The GUP helps the logistic model respond faster in real time.

Recent research on collaborative filtering (CF) focuses on inherent information about users and items and on how to recommend relevant items to users. The database used to build CF algorithms takes the form of a rating matrix composed of ratings that users give to items. Additional contextual factors such as time, place, condition and situation, which exist in the real world, are not considered in traditional CF algorithms. For instance, if a user prefers to watch news programs in the morning and movies in the evening, then contextual information, namely temporal information, should be taken into account in recommendation tasks; it is inappropriate to recommend movies to her/him in the morning even though such movies are the most relevant to her/him.

Given a training set which is a rating matrix and a user who requires recommendations, a CF algorithm tries to predict rating values on items which have not been rated by this user. After that, the CF algorithm arranges such items in descending order of predictive rating values and recommends this list to the user. In other words, the CF algorithm constructs a predictive function R2 that maps users and items to ratings:

R2: U × I → Rating

Function R2, called the traditional 2-dimension (2D) mapping, does not consider contextual factors, and so it can lack information necessary for highly accurate prediction. Suppose contextual information including location, time and companion is added to the prediction process; the 2D function R2 then becomes the 3-dimension (3D) mapping denoted as below:

R3: U × I × C → Rating

Note that C, U × I and U × I × C represent the context domain, the 2D (cross) domain and the 3D (cross) domain, respectively. In other words, function R3 gives recommendations to a user under circumstances specified as contextual information.

Although context has many different types, we can reduce these to three main types, which answer the three question forms when, where and who:

Time type indicates the time when the user requires a recommendation, for example: date, day of week, month and year.

Location type indicates the place where the user requires a recommendation, for example: theater, coffee house.

Companion type indicates the persons with whom the user goes or stays when the recommendation task is required, for example: alone, friends, girlfriend/boyfriend, family, co-workers.

Contextual information is organized in two forms: hierarchical structure and multidimensional (MD) structure.

According to the hierarchical form, the context domain C is defined by a set of contextual dimensions K = (K^{1}, K^{2}, K^{3}, ∙∙∙, K^{n}). K is represented as a hierarchy whose attributes K^{i} are ordered by ascending level of fineness. For example, given attributes K^{i} and K^{j} where i < j, K^{j} is finer than K^{i} and so K^{i} contains K^{j}. It is easy to recognize that K^{1} is the coarsest attribute, which contains all remaining attributes K^{2}, K^{3}, ∙∙∙, K^{n}. Each K^{i} contains values at the same level i and can be split into finer levels. An example of a contextual dimension for location follows.

In the above example, K^{2} = {City, Province} and K^{3} = {City → District, City → Suburb district, Province → District, Province → Suburb district}.

According to the MD form, the context domain C is defined as the Cartesian product of n dimensions, C = D_{1} × D_{2} × ∙∙∙ × D_{n}. Each dimension D_{i}, in turn, is a set of attributes, D_{i} = (a_{i1}, a_{i2}, ∙∙∙, a_{ik}). For example, suppose C has only one dimension of time, denoted D_{1} = Time (day of week). Then the cross domain U × I × C of predictive function R3 constitutes a 3D cube: User (name), Item (book name) and Time (day of week). Each cell in this cube is assigned a rating which is the predictive outcome of function R3.
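As an illustration, such a 3D cube can be represented by a simple mapping from (user, item, context) triples to ratings; the data and the stub below are purely hypothetical.

```python
# Hypothetical 3D rating cube over User × Item × Time (day of week).
# Each cell maps a (user, item, context) triple to a rating; absence means unrated.
ratings_3d = {
    ("Alice", "Gladiator", "Mon"): 5,
    ("Alice", "Golden Eye", "Sat"): 4,
    ("Bob", "Gladiator", "Sat"): 3,
}

def r3(user, item, context):
    """Stub of predictive function R3: return the known rating, else None."""
    return ratings_3d.get((user, item, context))
```

A full R3 would predict ratings for empty cells as well; the stub only looks up observed ones.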

There are three approaches to incorporating context into the recommendation process:

Contextual pre-filtering: Firstly, given a concrete context c, ratings associated with c are selected from the 3D domain U × I × C, reducing it to a 2D domain; then the traditional 2D function R2 is applied to produce recommendations.

Contextual post-filtering: Firstly, the traditional 2D function R2 is used to produce the list of recommended items. After that, context c is used to fine-tune this list, removing items that are irrelevant to the concrete context.

Contextual modeling: The 3D function R3 is applied directly to the context-aware cross domain U × I × C.

The basic idea of contextual pre-filtering is to project the 3D domain U × I × C onto a 2D plane based on a concrete context c.

The concrete context c thus acts as a filter: only ratings associated with c are kept in the projected 2D rating matrix, to which the traditional 2D function R2 is then applied.

The essence of contextual post-filtering is to fine-tune the raw recommendation results produced by the predictive function R2, which did not consider contextual factors. This method tries to figure out the user’s context-aware interests, preferences or attributes by using artificial intelligence and mining techniques, and applies such attributes to the raw results so as to remove irrelevant items or change their ranks in the final recommendation list. For example, given context c = “evening” which indicates Mary’s interest in “watching movies in the evening”, the contextual post-filtering approach will remove all news or sport events from her recommendation list.

The strong point of the contextual pre-filtering and post-filtering approaches is the ability to take advantage of legacy recommendation algorithms that do not take contextual factors into account.

The essence of contextual modeling is to incorporate contextual information directly into the predictive function R3, so R3 is constructed as an inference model such as a data mining, machine learning, heuristic or statistical model. The 2D predictive function is not used in contextual modeling. This implies that the strong point of this method is its pure and powerful inference mechanism. It opens a new trend in context-aware recommendation research with many prospects, although a few related techniques are extensions of 2D algorithms.

The approach in this paper is a hybrid of contextual post-filtering and contextual modeling: a logistic inference model is applied in the post stage of the recommendation process.

The new approach is motivated by two observations:

Although contextual information is necessary to improve the quality of the recommendation process, it cannot replace the essential rating information in collaborative filtering research. An inference model taking contextual factors into account should be used as a filter that adjusts the recommendations returned from predictive ratings, so as to give users more appropriate items in concrete circumstances.

When additional contextual information is considered, the speed of the recommendation process decreases. So the inference mechanism should be fast enough for real-time response.

The basic idea is to apply a fast inference model, namely a logistic regression function, to a list of recommended items so as to achieve a better recommendation result under contextual information. The logistic regression model responds immediately to the binary request “whether or not a list of items is relevant to a concrete context or preferred by users”. Because there are many items and each item is associated with an individual regressive variable, the domain of the regression function becomes huge, which decreases the speed of the algorithm. In order to solve this problem, the item space is reduced to the “general user pattern”.

A general user pattern (GUP) is a set of items to which “many” users give ratings. Since rating values on the GUP are not necessarily high, it reflects solely users’ access or rating frequency. Given a threshold θ, let n and n_{j} be the total number of users and the number of users who rate item x_{j}, respectively; we have:

GUP = {x_{j} : n_{j}/n ≥ θ}

So the GUP is defined as the set of items for which the ratio of the number of users who rate the item to the total number of users is greater than or equal to a given threshold θ.
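Reading the definition as n_{j}/n ≥ θ, the GUP can be extracted from a rating matrix in a few lines; the dict-of-dicts layout and the sample data are assumptions for illustration.

```python
def general_user_pattern(rating_matrix, theta):
    """Return the GUP: items x_j whose rating frequency n_j / n >= theta.

    rating_matrix maps user -> {item: rating}; a rating of 0 or a missing
    entry means the user did not rate the item.
    """
    n = len(rating_matrix)          # n: total number of users
    counts = {}                     # n_j: number of users who rated item x_j
    for user_ratings in rating_matrix.values():
        for item, rating in user_ratings.items():
            if rating > 0:
                counts[item] = counts.get(item, 0) + 1
    return {item for item, n_j in counts.items() if n_j / n >= theta}

matrix = {
    "u1": {"x1": 5, "x2": 3},
    "u2": {"x1": 4, "x3": 2},
    "u3": {"x1": 1},
}
print(general_user_pattern(matrix, 0.5))  # {'x1'}: x2, x3 are rated by only 1 of 3 users
```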

Given a GUP, a context c and a logistic regression function f, the regressive (or independent) variables of f are taken from the GUP. The response of f is a value between 0 and 1, where values of at least 0.5 indicate that the user is likely to prefer the GUP under context c, and values below 0.5 indicate the opposite. Suppose GUP = {x_{1}, x_{2}, ∙∙∙, x_{n}}; the logistic regression function f(x_{1}, x_{2}, ∙∙∙, x_{n}) can be considered to be dependent on the GUP.

Given the raw recommended list R, an instance of the GUP is initialized on R. This instance, denoted INS, is the set of predictive rating values taken from R whose respective items co-exist in both R and the GUP. For example, if R = {x_{1} = 5, x_{2} = 3, x_{3} = 4} and GUP = {x_{1}, x_{3}} then INS = {x_{1} = 5, x_{3} = 4}, because items x_{1} and x_{3} exist in both R and the GUP with respective values 5 and 4.
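The matching of R with the GUP is a straightforward intersection; the sketch below reproduces the example from the text.

```python
def build_instance(recommended, gup):
    """INS: predictive ratings from R restricted to items also present in GUP."""
    return {item: rating for item, rating in recommended.items() if item in gup}

R = {"x1": 5, "x2": 3, "x3": 4}
GUP = {"x1", "x3"}
print(build_instance(R, GUP))  # {'x1': 5, 'x3': 4}
```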

Consequently, regression model f is evaluated on INS with regard to context c in order to decide whether the recommended list is relevant to the user under that context.

In general, the algorithm has four steps:

1) A 2D predictive function is applied to the rating matrix U × I so as to produce a raw list R of recommended items, without contextual factors.

2) The GUP is discovered over the contextual 3D cross domain U × I × C. Items in the GUP are frequent items.

3) The multivariate logistic function f is learned from the cross domain U × I × C by statistical techniques.

4) Function f is used to remove irrelevant items from the list R. In other words, only context-aware items are kept in R, and the final filtered list is the result recommended to users. This step includes two sub-steps:

a) The instance of the GUP, called INS, is constructed by matching the GUP with R.

b) Function f is evaluated on INS to perform the removal of redundant items from R.
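The four steps can be sketched as one pipeline. Here `rate_2d` (the 2D predictor of step 1) and `f` (the logistic model learned in step 3) are hypothetical stand-ins, and the removal rule of step 4 follows one plausible reading: if f rejects INS under context c, the GUP items are dropped from R.

```python
def recommend_with_context(user, context, rate_2d, f, gup, threshold=0.5):
    """Four-step context-aware recommendation sketch."""
    R = dict(rate_2d(user))                          # step 1: raw list R, no context
    ins = {x: r for x, r in R.items() if x in gup}   # step 4a: INS = match GUP with R
    if f(ins, context) < threshold:                  # step 4b: user dislikes GUP under c
        for x in ins:
            del R[x]                                 # remove the irrelevant items
    return R

# Toy stand-ins (hypothetical): a constant 2D predictor and a context rule.
rate_2d = lambda user: {"x1": 5, "x2": 3, "x3": 4}
f = lambda ins, context: 0.9 if context == "evening" else 0.1

print(recommend_with_context("Mary", "evening", rate_2d, f, {"x1", "x3"}))  # keeps all
print(recommend_with_context("Mary", "morning", rate_2d, f, {"x1", "x3"}))  # drops x1, x3
```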

Because step 3 is the most important, the method used to construct the multivariate logistic function is now discussed in detail. The probability that GUP = {x_{1}, x_{2}, ∙∙∙, x_{n}} belongs to context c is denoted p, and the odd of this event is the ratio p/(1 − p). The logistic model assumes that the logarithm of the odd is a linear function of the items, which gives Equation (1):

log(p/(1 − p)) = α_{0} + α_{1}x_{1} + α_{2}x_{2} + ∙∙∙ + α_{n}x_{n} (1)

Note that the odd also expresses how likely it is that the GUP belongs to context c.

Note that x_{1}, x_{2}, ∙∙∙, x_{n} are regressive or independent variables whose values are later obtained from the recommended list R, and the coefficients α_{i} are called the parameters of the logistic model. If the odd is considered as the response or dependent variable, Equation (1) is re-written as follows:

odd = exp(α_{0} + α_{1}x_{1} + α_{2}x_{2} + ∙∙∙ + α_{n}x_{n}) (2)

where exp(∙) denotes the exponential function.

The probability p that the user likes items in the GUP is computed according to the following function, derived from Equation (2), with the note that this probability is the logistic regression function f(x_{1}, x_{2}, ∙∙∙, x_{n}):

p = f(x_{1}, x_{2}, ∙∙∙, x_{n}) = exp(α_{0} + α_{1}x_{1} + ∙∙∙ + α_{n}x_{n}) / (1 + exp(α_{0} + α_{1}x_{1} + ∙∙∙ + α_{n}x_{n})) (3)

Equation (3) represents the multivariate logistic model f with regard to a concrete context c. The approach in this paper uses this equation to estimate whether or not a user prefers a list of recommended items under a concrete context. For example, given the user “John”, GUP = {“Gladiator”, “Golden Eye”} and the recommended movie list {“Gladiator” with predictive rating 5, “Golden Eye” with predictive rating 4, “Four Rooms” with predictive rating 4}: if f produces a value greater than or equal to 0.5 when evaluated on the GUP with regard to the time context “evening”, then it asserts that John likes this list and no film is removed. The logistic model is built in offline mode so as not to affect response time.
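Equation (3) is a one-liner in code. The coefficients below are arbitrary placeholders, since the real α_{i} come from the estimation procedure discussed next; the ratings [5, 4] mirror John's example.

```python
import math

def logistic(alpha, x):
    """Equation (3): p = exp(a_0 + sum a_i*x_i) / (1 + exp(a_0 + sum a_i*x_i))."""
    s = alpha[0] + sum(a * xi for a, xi in zip(alpha[1:], x))
    return math.exp(s) / (1.0 + math.exp(s))

# Hypothetical coefficients for the "evening" context; in the paper they are
# learned by maximum likelihood estimation, not chosen by hand.
alpha_evening = [-3.0, 0.4, 0.3]
p = logistic(alpha_evening, [5, 4])   # John's predictive ratings for the GUP movies
print(p >= 0.5)                       # True -> keep the whole list
```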

The problem that needs to be solved now is to determine the parameters α_{i} of the logistic function f. We will use the method of maximum likelihood estimation (MLE) over the contextual rating matrix D_{c} with two dimensions, user and item. Suppose D_{c} has m rows and n columns, and let y_{i} = (y_{i1}, y_{i2}, ∙∙∙, y_{in}) be the i^{th} row of D_{c}; it is easy to infer that y_{ij} is the i^{th} instance (rating value) of item x_{j}. Suppose rating values y_{ij} range from 1 to v, where v means most favorite and 1 means most disliked, and the value 0 indicates that the user did not rate the item. Suppose the GUP also has n items, the same as those in D_{c}, and let a_{i} = (a_{i1}, a_{i2}, ∙∙∙, a_{in}) be a possible instance of such n items. Hence there are (v+1)^{n} such instances, because a_{ij} ranges from 0 to v. For example, if n = 2 and v = 2 we have 9 instances (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2). Let z_{i} be the number of rows y_{i} in D_{c} such that y_{i} = a_{i}; there are N = (v+1)^{n} such counts z_{i}. Let p_{i} be the instance of the logistic function f evaluated on x_{1} = y_{i1}, x_{2} = y_{i2}, ∙∙∙, x_{n} = y_{in}; according to Equation (3) we have:

p_{i} = exp(α_{0} + α_{1}y_{i1} + ∙∙∙ + α_{n}y_{in}) / (1 + exp(α_{0} + α_{1}y_{i1} + ∙∙∙ + α_{n}y_{in}))
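The counts z_{i} are just row frequencies of D_{c}; a minimal sketch (with hypothetical data) follows.

```python
from collections import Counter

def instance_counts(D_c):
    """z_i: how many rows of D_c equal each possible instance a_i."""
    return Counter(tuple(row) for row in D_c)

D_c = [(0, 0), (0, 1), (0, 0)]       # m = 3 rows, n = 2 items, v = 1
print(instance_counts(D_c)[(0, 0)])  # 2
```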

It is easy to infer that p_{i} is the probability that instance a_{i} occurs in D_{c}. The likelihood function of f over D_{c} is:

L(α_{0}, α_{1}, ∙∙∙, α_{n}) = ∏_{i=1}^{N} p_{i}^{z_{i}} (1 − p_{i})^{1 − z_{i}}

The logarithm likelihood function of f is:

LogL = Σ_{i=1}^{N} [z_{i}log(p_{i}) + (1 − z_{i})log(1 − p_{i})]

The first-order partial derivatives of the logarithm likelihood function with regard to the α_{k} are:

∂LogL/∂α_{0} = Σ_{i=1}^{N} (z_{i} − p_{i}), ∂LogL/∂α_{k} = Σ_{i=1}^{N} (z_{i} − p_{i})y_{ik} for k = 1, 2, ∙∙∙, n

With the convention that y_{i0} = 1, we have:

∂LogL/∂α_{k} = Σ_{i=1}^{N} (z_{i} − p_{i})y_{ik} for k = 0, 1, ∙∙∙, n

Setting all partial derivatives to zero yields a system of equations whose solutions are the maximum likelihood estimates of α_{0}, α_{1}, ∙∙∙, and α_{n}, respectively. This system is referred to as Equation (7):

Σ_{i=1}^{N} (z_{i} − p_{i})y_{ik} = 0, k = 0, 1, ∙∙∙, n (7)

Because Equation (7) is a set of n + 1 non-linear equations in n + 1 variables, its solution (the estimates of α_{0}, α_{1}, ∙∙∙, α_{n}) can be found by numerical methods.
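In practice Equation (7) can be attacked by gradient ascent on the log-likelihood, whose gradient is exactly the left-hand side of Equation (7). The sketch below assumes the Bernoulli form of the likelihood used in this section and entirely made-up data; note that real solutions need not exist (the worked example of this section ends in complex ones).

```python
import math

def fit_logistic(Y, z, lr=0.1, iters=5000):
    """Gradient ascent on LogL = sum_i [z_i*log(p_i) + (1 - z_i)*log(1 - p_i)].

    Y: list of instances y_i (tuples of length n); z: observed counts z_i.
    Returns alpha = (a_0, a_1, ..., a_n)."""
    n = len(Y[0])
    alpha = [0.0] * (n + 1)
    for _ in range(iters):
        grad = [0.0] * (n + 1)
        for yi, zi in zip(Y, z):
            s = alpha[0] + sum(a * y for a, y in zip(alpha[1:], yi))
            p = 1.0 / (1.0 + math.exp(-s))       # p_i from Equation (3)
            grad[0] += zi - p                    # convention y_i0 = 1
            for k, y in enumerate(yi):
                grad[k + 1] += (zi - p) * y
        alpha = [a + lr * g for a, g in zip(alpha, grad)]
    return alpha
```

For a balanced toy data set such as Y = [(0,), (0,), (1,), (1,)] with z = [0, 1, 0, 1], the gradient vanishes at α = (0, 0), so the fitted coefficients stay at zero.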

For example, suppose there are 2 items, GUP = {x_{1}, x_{2}}, receiving binary values where 0 means dislike and 1 means like. So we have n = 2, v = 1 and 4 possible instances y_{1} = (y_{11} = 0, y_{12} = 0), y_{2} = (y_{21} = 0, y_{22} = 1), y_{3} = (y_{31} = 1, y_{32} = 0), y_{4} = (y_{41} = 1, y_{42} = 1). According to Equation (3), the instances of the logistic function f evaluated on y_{1}, y_{2}, y_{3} and y_{4} are:

p_{1} = exp(α_{0})/(1 + exp(α_{0})), p_{2} = exp(α_{0} + α_{2})/(1 + exp(α_{0} + α_{2})), p_{3} = exp(α_{0} + α_{1})/(1 + exp(α_{0} + α_{1})), p_{4} = exp(α_{0} + α_{1} + α_{2})/(1 + exp(α_{0} + α_{1} + α_{2}))

Suppose only the instance y_{1} = (y_{11} = 0, y_{12} = 0) is observed, so that z_{1} = 1, z_{2} = z_{3} = z_{4} = 0. Equation (7) becomes:

1 − (p_{1} + p_{2} + p_{3} + p_{4}) = 0, p_{3} + p_{4} = 0, p_{2} + p_{4} = 0

It is necessary to solve Equation (7) with regard to the coefficients α_{0}, α_{1} and α_{2}. Supposing α_{0} = 0 and α_{2} = α_{1}, we have p_{2} = p_{3} and the system reduces to:

exp(α_{1})/(1 + exp(α_{1})) + exp(2α_{1})/(1 + exp(2α_{1})) = 0

By using mathematical software such as Mathematica, this equation reduces (with t = exp(α_{1})) to 2t^{2} + t + 1 = 0, whose solutions are complex:

exp(α_{1}) = (−1 ± i√7)/4

where i denotes the imaginary unit.

Finally, probabilities p_{1}, p_{2} and p_{3} are determined by substituting the complex solutions into Equation (3):

p_{1} = 0.5, p_{2} = p_{3} = (1 ± i√7)/4

where p_{1} = 0.5 follows directly from the assumption α_{0} = 0.

If the GUP receives the instance (x_{1} = 0, x_{2} = 0), the logistic probability is p_{1} = f(x_{1} = 0, x_{2} = 0) = 0.5, which leads to the conclusion that the user does not like these two items.
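The closing claim of the example is easy to check in code: with α_{0} = 0, the probability at the instance (x_{1} = 0, x_{2} = 0) equals 0.5 regardless of α_{1} and α_{2} (the coefficient values below are arbitrary).

```python
import math

def f(a0, a1, a2, x1, x2):
    """Equation (3) for two items."""
    s = a0 + a1 * x1 + a2 * x2
    return math.exp(s) / (1.0 + math.exp(s))

# alpha_1 and alpha_2 are irrelevant at (0, 0); only alpha_0 = 0 matters.
print(f(0.0, 2.5, -1.7, 0, 0))  # 0.5
```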

The approach in this paper is a hybrid of contextual post-filtering and contextual modeling, where the logistic model is applied in the post stage of the recommendation process. The thinking behind this approach is that rating values, obtained explicitly by questionnaires or implicitly by inferring users’ behaviors, are the most important information, while contextual factors around users or related to the application are additional information which is useful but not essential. Compared to contextual pre-filtering, this approach limits the loss of rating information in the rating matrix by skipping data pre-filtering. At the post stage, this approach removes only items which users are asserted not to like under the contextual condition. Such an assertion is the outcome of a steady inference model, namely the logistic model.

The removal restriction increases the recall metric by preserving the solution space, but does not lessen the precision metric in comparison with the pre-filtering method. Compared to the traditional or post-filtering method, this approach is more accurate because of the steady inference mechanism of the logistic function, which is appropriate to binary requests such as the yes/no question “whether or not Mary prefers to browse commercial websites in the evening”. Moreover, this approach exploits the relationship among items in the general user pattern, which is useful to the recommendation process but is not considered in contextual pre-filtering or post-filtering.

I express my deep gratitude to Prof. Dr. Ho, Hang T. T., Vinh Long General Hospital, Vietnam Ministry of Health, who funded me to complete and publish this research.

Loc Nguyen (2016) A New Aware-Context Collaborative Filtering Approach by Applying Multivariate Logistic Regression Model into General User Pattern. Journal of Data Analysis and Information Processing, 4, 124-131. doi: 10.4236/jdaip.2016.43011