A New Context-Aware Collaborative Filtering Approach Applying a Multivariate Logistic Regression Model to the General User Pattern

Traditional collaborative filtering (CF) does not take into account contextual factors such as time, place, companion and environment, which are useful pieces of information about users and about the recommender application. Recent context-aware CF therefore takes advantage of such information in order to improve the quality of recommendation. There are three main context-aware approaches: contextual pre-filtering, contextual post-filtering and contextual modeling. Each approach has individual strong points and drawbacks, but all of them require a stable and fast inference model to support the context-aware recommendation process. This paper proposes a new approach which discovers a multivariate logistic regression model by mining both traditional rating data and contextual data. The logistic model is an optimal inference model for answering the binary question "whether or not a user prefers a list of recommendations under a given contextual condition". Consequently, this regression model is used as a filter to remove irrelevant items from the recommendations; the final list is the best set of recommendations to give to users under the contextual information. Moreover, the search space of the logistic model is reduced to a smaller set of items, the so-called general user pattern (GUP). The GUP allows the logistic model to respond faster in real time.


Introduction
Recent research on collaborative filtering (CF) focuses on inherent information about users and items and on how to recommend relevant items to users. The database used to build CF algorithms takes the form of a rating matrix composed of the ratings that users give to items. Additional contextual factors such as time, place, condition and situation, which exist in the real world, are not considered by these CF algorithms. For instance, if a user prefers to watch news programs in the morning and movies in the evening, then this contextual (temporal) information should be taken into account by the recommendation task: it is inappropriate to recommend movies to her/him in the morning even though those movies are the most relevant to her/him.
Given a training set in the form of a rating matrix and a user who requires recommendations, a CF algorithm tries to predict the rating values of the items which this user has not rated yet. The algorithm then arranges those items in descending order of predicted rating and recommends the resulting list to the user. In other words, the CF algorithm constructs a predictive function R2 ([1], pp. 217-250) whose domain is the Cartesian product of a set of users U and a set of items I. The domain U × I is also called the rating matrix, and the co-domain of R2 is a set of predictive ratings denoted R:

R2: U × I → R

The function R2, called the traditional 2-dimensional (2D) mapping, does not consider contextual factors, so it can lack information necessary for highly accurate prediction. Suppose contextual information such as location, time and companion is added to the prediction process; the 2D function R2 then becomes the 3-dimensional (3D) mapping denoted as below:

R3: U × I × C → R

Here C, U × I and U × I × C represent the context domain, the 2D (cross) domain and the 3D (cross) domain, respectively. In other words, function R3 gives recommendations to the user under circumstances specified by contextual information. Although context has many different types, these types can be reduced to three main ones, answering the three question forms when, where and who ([1], pp. 224-225):
 Time type indicates the time when the user requires a recommendation, for example: date, day of week, month, and year.
 Location type indicates the place where the user requires a recommendation, for example: theater, coffee house.
 Companion type indicates the persons with whom the user goes or stays when the recommendation task is required, for example: alone, friends, girlfriend/boyfriend, family, co-workers.
Contextual information is organized in two forms: the hierarchical structure ([2], p. 1537) and the multi-dimensional (MD) data model.

According to the hierarchical form, the context domain C is defined over a set of contextual attributes K = (K1, K2, K3, ..., Kn) arranged in a hierarchy, whose attributes Ki are ordered by ascending level of fineness. Given attributes Ki and Kj where i < j, Kj is finer than Ki, and so Ki contains Kj. It is easy to recognize that K1 is the coarsest attribute, containing all remaining attributes K2, K3, ..., Kn. Each Ki contains values at the same level i and can be split into finer levels. An example of a contextual dimension is shown in Figure 1. In that example, K2 = {City, Province} and K3 = {City → District, City → Suburb district, Province → District, Province → Suburb district}.

According to the MD form, the context domain C is defined as the Cartesian product of n dimensions. For example, suppose C has only one dimension of time, denoted D1 = Time (day of week). The cross domain U × I × C of the predictive function R3 then constitutes a 3D cube: User (name), Item (book name) and Time (day of week). Each cell in this cube is assigned a rating which is the predictive outcome of function R3. Figure 2 depicts such an MD cube ([1], p. 227).
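The MD cube and its projection onto one context value can be sketched as follows; the users, items and the single Time dimension below are illustrative assumptions, not data from the paper.

```python
from typing import Dict, Tuple

# rating_cube[(user, item, context)] = rating; a 3D cube U x I x C
rating_cube: Dict[Tuple[str, str, str], int] = {
    ("Mary", "Gladiator", "Weekend"): 5,
    ("Mary", "Gladiator", "Weekday"): 2,
    ("John", "Golden Eye", "Weekend"): 4,
}

def project_on_context(cube, context):
    """Project the 3D cube onto one context value, yielding a 2D rating matrix U x I."""
    return {(u, i): r for (u, i, c), r in cube.items() if c == context}

matrix_weekend = project_on_context(rating_cube, "Weekend")
print(matrix_weekend)  # {('Mary', 'Gladiator'): 5, ('John', 'Golden Eye'): 4}
```

Projecting the cube in this way is exactly the step that turns the 3D domain U × I × C back into a 2D rating matrix for a fixed context.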
There are three approaches ([1], pp. 232-233) to applying context to the recommendation process:
 Contextual pre-filtering: Firstly, a given context c ∈ C is used to select the user-item pairs (u, i) which are more relevant to this context, leading to a context-aware cross domain U × I. After that, the traditional 2D function R2 is applied on this cross domain.
 Contextual post-filtering: Firstly, the traditional 2D function R2 is used to produce the list of recommended items. After that, the context c ∈ C is used to fine-tune this list in order to remove items which are irrelevant to the concrete context.
 Contextual modeling: The 3D function R3 is applied directly on the context-aware cross domain U × I × C.
The basic idea of contextual pre-filtering is to project the 3D domain U × I × C onto the 2D domain U × I. Let ∏c be the projection operation based on the context condition c; we have:

U × I = ∏c(U × I × C)

The concrete context c ∈ C, which is a strict projection condition, can make the cross domain U × I small or sparse, causing low predictive accuracy. So a generalization technique is used to loosen the projection condition, namely the exact condition c is replaced by a more general condition c'. For example, the context c = "Saturday", which indicates that "Mary prefers to go shopping on Saturday", is replaced by the more flexible context c' = "Weekend", because if she often goes shopping on Saturday she may also like going on Sunday. The general context not only expands the space of potential recommendation solutions but also improves predictive accuracy.

The essence of contextual post-filtering is to fine-tune the raw recommendation results taken from the predictive function R2, which did not consider contextual factors. Consequently, this method tries to figure out the user's context-aware interests, preferences or attributes by artificial intelligence and mining techniques, and applies such attributes to the raw results so as to remove irrelevant items or change their ranks in the final recommendation list. For example, given the context c = "evening", which indicates Mary's interest "watching movies in the evening", the contextual post-filtering approach will remove all news and sport events from her recommendation list.

The strong point of the contextual pre-filtering and post-filtering approaches is their ability to take advantage of legacy recommendation algorithms which do not take contextual factors into account.
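Contextual pre-filtering with generalization can be sketched as below; the generalization map and the toy cube are illustrative assumptions, not the paper's data.

```python
# Widen the exact context ("Saturday") to a more general one ("Weekend")
# before projecting the cube, so the resulting 2D matrix is less sparse.
GENERALIZE = {"Saturday": "Weekend", "Sunday": "Weekend",
              "Monday": "Weekday", "Tuesday": "Weekday"}

def pre_filter(cube, context):
    """Keep (user, item) ratings whose context generalizes like the query context."""
    general = GENERALIZE.get(context, context)
    return {(u, i): r for (u, i, c), r in cube.items()
            if GENERALIZE.get(c, c) == general}

cube = {("Mary", "Shopping mall", "Saturday"): 5,
        ("Mary", "Cinema", "Sunday"): 4,
        ("Mary", "Office canteen", "Monday"): 3}
# Saturday generalizes to Weekend, so the Sunday rating is kept as well
print(pre_filter(cube, "Saturday"))
```

The 2D function R2 would then be trained on the matrix returned by `pre_filter`.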
The essence of contextual modeling is to incorporate contextual information directly into the predictive function R3, so R3 is constructed as an inference model such as a data mining, machine learning, heuristic or statistical model. The 2D predictive function is not used in contextual modeling at all, which implies that the strong point of this method is its pure and powerful inference mechanism. It opens a new trend in context-aware recommendation research and offers many prospects, although a few related techniques are extensions of 2D algorithms.
The approach in this paper is a hybrid of contextual post-filtering and contextual modeling: a logistic inference model ([3], pp. 372-411) is used as the post-filter to make recommendations more relevant to users according to contextual factors. Section 2, which is the main section, describes the proposed approach in detail. The concept of the general user pattern (GUP) is introduced first, and then the collaborative filtering algorithm based on the GUP and the logistic model is presented. The equations for solving the logistic model, which are the most important part of the proposed algorithm, are constructed with the support of mathematical tools. An example is given at the end of Section 2 to illustrate the proposed approach. Section 3 is the conclusion.

Basic Idea and Details of the New Approach
The new approach is suggested by two observations:
 Although contextual information is necessary to improve the quality of the recommendation process, it cannot replace the essential rating information of collaborative filtering. An inference model taking contextual factors into account should therefore be used as a filter that adjusts the recommendations returned from predictive ratings, so as to give users items more appropriate to the concrete circumstances.
 When additional contextual information is considered, the speed of the recommendation process decreases, so the inference mechanism should be fast enough for real-time response.
The basic idea is to apply a fast inference model, namely a logistic regression function, to a list of recommended items so as to achieve a better recommendation result under contextual information. The logistic regression model immediately answers the binary request "whether or not a list of items is relevant to the concrete context or preferred by the user". Because there are various items and each item is associated with an individual regression function, the domain of the regression function can become huge, which decreases the speed of the algorithm. In order to solve this problem, the item space is reduced to the "general user pattern".
The general user pattern (GUP) is a set of items to which "many" users give ratings. Since the rating values on GUP items are not necessarily high, the GUP reflects solely users' access or rating frequency. Given a threshold θ, let n and nj be the total number of users and the number of users who rate item xj, respectively; we have:

GUP = { xj ∈ I : nj / n ≥ θ }

So the GUP is defined as the set of items such that the ratio of the number of users who rate each item to the total number of users is greater than or equal to the given threshold θ.
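The GUP discovery can be sketched as a simple frequency count over the rating matrix; the toy ratings below are illustrative assumptions.

```python
# Discover the GUP: items rated by at least a fraction theta of all users.
def discover_gup(ratings, theta):
    """ratings: {user: {item: rating}}; return items whose rater ratio >= theta."""
    n = len(ratings)                      # total number of users
    counts = {}                           # n_j: how many users rated item x_j
    for user_ratings in ratings.values():
        for item in user_ratings:
            counts[item] = counts.get(item, 0) + 1
    return {item for item, n_j in counts.items() if n_j / n >= theta}

ratings = {
    "Mary": {"Gladiator": 5, "Golden Eye": 3},
    "John": {"Gladiator": 4, "Four Rooms": 4},
    "Anna": {"Gladiator": 2},
}
print(discover_gup(ratings, theta=0.5))  # {'Gladiator'}
```

Here only "Gladiator" is rated by all three users (ratio 1 ≥ 0.5), while the other items are each rated by a single user (ratio 1/3 < 0.5).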
Given a GUP, a context c and a logistic regression function f, the regressive (independent) variables of f are taken from the GUP. The response of f ranges between 0 and 1, where a value near 1 indicates that the user likely prefers the GUP under context c, and a value near 0 the opposite. Suppose GUP = {x1, x2, ..., xn}; the logistic regression function f(x1, x2, ..., xn) can then be considered to depend on the GUP.

Given the raw recommended list R, an instance of the GUP is initialized on R. This instance, denoted INS, is the set of predictive rating values taken from R for those items which co-exist in both R and the GUP. For example, if R = {x1 = 5, x2 = 3, x3 = 4} and GUP = {x1, x3}, then INS = {x1 = 5, x3 = 4}, because items x1 and x3 exist in both R and the GUP with respective predicted values 5 and 4.

Consequently, the regression model f is evaluated on INS with regard to the context c ∈ C. If the outcome of f on INS is near 0, then all items of the GUP are removed from R. Finally, the list R, after being fine-tuned by pruning the irrelevant items, is recommended to the user.
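Building the instance INS is a simple intersection of the raw list with the GUP, as the example with x1, x2, x3 above suggests:

```python
# Build the GUP instance INS by matching the raw recommendation list R
# (item -> predicted rating) against the GUP.
def build_ins(raw_list, gup):
    """Keep predicted ratings only for items present in both R and the GUP."""
    return {item: rating for item, rating in raw_list.items() if item in gup}

R = {"x1": 5, "x2": 3, "x3": 4}
GUP = {"x1", "x3"}
print(build_ins(R, GUP))  # {'x1': 5, 'x3': 4}
```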
In general the algorithm has four steps:
1) A 2D predictive function is applied to the rating matrix U × I so as to produce a raw list R of recommended items, without regard to contextual factors.
2) The GUP is discovered over the contextual 3D cross domain U × I × C; items in the GUP are frequent items.
3) The multivariate logistic function f is learned from the cross domain U × I × C by a statistical technique.
4) Function f is used to remove irrelevant items from the list R; in other words, only context-aware items are kept in R, and the final filtered list is the result recommended to the user. This step includes two sub-steps:
a) The instance of the GUP, called INS, is constructed by matching the GUP with R.
b) Function f is evaluated on INS to perform the removal of the redundant items from R.
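The four steps can be sketched end-to-end as follows. The 2D predictor is a trivial stub, the logistic parameters `alpha` stand in for the model learned offline in step 3, and for brevity the GUP is discovered from a 2D matrix here; all of these are illustrative assumptions.

```python
import math

def predict_2d(user, rating_matrix):
    # stub 2D predictor: recommend every item the user has not rated, rating 4
    rated = rating_matrix.get(user, {})
    items = {i for r in rating_matrix.values() for i in r}
    return {i: 4 for i in items if i not in rated}

def discover_gup(rating_matrix, theta):
    # step 2: frequent items (rated by at least a fraction theta of users)
    n = len(rating_matrix)
    counts = {}
    for r in rating_matrix.values():
        for i in r:
            counts[i] = counts.get(i, 0) + 1
    return {i for i, c in counts.items() if c / n >= theta}

def evaluate_logistic(alpha, ins):
    # Equation (3): p = 1 / (1 + exp(-(a0 + sum a_i * x_i)))
    z = alpha[0] + sum(a * x for a, x in zip(alpha[1:], ins.values()))
    return 1.0 / (1.0 + math.exp(-z))

def recommend(user, rating_matrix, alpha, theta=0.5):
    R = predict_2d(user, rating_matrix)                # step 1
    gup = discover_gup(rating_matrix, theta)           # step 2
    ins = {x: r for x, r in R.items() if x in gup}     # step 4a (alpha from step 3)
    if ins and evaluate_logistic(alpha, ins) < 0.5:    # step 4b: prune GUP items
        R = {x: r for x, r in R.items() if x not in gup}
    return R

ratings = {"Mary": {"A": 5}, "John": {"A": 4, "B": 3}}
print(recommend("Mary", ratings, alpha=[5.0, 0.1]))  # {'B': 4}
```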
Because step 3 is the most important, the method for constructing the multivariate logistic function is now discussed in detail. Let p be the probability that GUP = {x1, x2, ..., xn} belongs to the context c ∈ C; in other words, p is the probability that the user prefers the GUP under context c. The concept odd ([3], p. 11.8) is defined as the ratio of p to 1 − p. This ratio represents how much the user likes the GUP versus how much the user dislikes it; note that the odd also expresses how likely the GUP belongs to context c ∈ C versus how likely it does not. The natural logarithm of the odd is modeled as a linear regression function of the n variables of the GUP:

ln(odd) = ln(p / (1 − p)) = α0 + α1 x1 + α2 x2 + ... + αn xn   (1)

Note that x1, x2, ..., xn are regressive (independent) variables whose values are obtained later from the recommended list R, and the coefficients αi are called the parameters of the logistic model. If the odd is considered as the response (dependent) variable, Equation (1) is re-written as follows:

odd = exp(α0 + α1 x1 + α2 x2 + ... + αn xn)   (2)

where exp(•) denotes the exponential function. The probability p that the user likes the items in the GUP is computed according to the following function derived from Equation (2), with attention that this probability is the logistic regression function itself:

p = f(x1, x2, ..., xn) = exp(α0 + α1 x1 + ... + αn xn) / (1 + exp(α0 + α1 x1 + ... + αn xn))   (3)

Equation (3) represents the multivariate logistic model f with regard to a concrete context c. The approach in this paper uses this equation to estimate whether or not a user prefers a list of recommended items under the concrete context. For example, given the user "John", GUP = {"Gladiator", "Golden Eye"} and the recommended movie list {"Gladiator" with predictive rating 5, "Golden Eye" with predictive rating 4, "Four Rooms" with predictive rating 4}: if f produces a value greater than or equal to 0.5 when evaluated on the GUP instance with regard to the time context "evening", then John is asserted to like this list and no film is removed. The logistic model is built in offline mode so as not to affect response time.
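Evaluating Equation (3) on a GUP instance is a one-liner; the parameter values `alpha` below are illustrative assumptions, as if they had been learned offline for the context "evening".

```python
import math

def logistic(alpha, ins_values):
    """p = exp(a0 + sum a_i x_i) / (1 + exp(...)) -- Equation (3)."""
    z = alpha[0] + sum(a * x for a, x in zip(alpha[1:], ins_values))
    return 1.0 / (1.0 + math.exp(-z))

alpha = [-3.0, 0.4, 0.35]   # hypothetical a0, a1, a2 for context "evening"
ins = [5, 4]                # predicted ratings of Gladiator, Golden Eye
p = logistic(alpha, ins)
print(round(p, 3))          # 0.599 -> p >= 0.5, so the GUP items are kept
```

With these parameters the model answers "likes" (p ≥ 0.5), so no film would be removed from John's list.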
The problem to be solved now is to determine the parameters αi of the logistic function f. The method of maximum likelihood estimation (MLE) [4] is used to construct them. The training data D is the rating cube whose dimensions are user, item and context; each cell of the rating cube is quantified by the value that a user gives to an item in a concrete context. If the rating cube is projected onto a context c, we get a rating matrix Dc with two dimensions, user and item. Suppose Dc has m rows and n columns, and let yi = (yi1, yi2, ..., yin) be the i-th row of Dc, so that yij is the i-th instance (rating value) of item xj. Suppose the rating values yij range from 1 to v, where v means most favorite and 1 most disliked, and the value 0 indicates that the user did not rate the item. Suppose the GUP has the same n items as Dc, and let ai = (ai1, ai2, ..., ain) be a possible instance of these n items. Hence there are (v + 1)^n such instances, because each aij ranges from 0 to v. For example, if n = 2 and v = 2 we have 9 instances: (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2). Let zi be the number of rows yi in Dc such that yi = ai; there are N = (v + 1)^n such counts zi. Let pi be the value of the logistic function f evaluated on ai; it is easy to infer that pi is the probability that the instance ai occurs in Dc [4]. The likelihood function ([4], p. 4) of f given the training data Dc is:

L(α0, α1, ..., αn) = ∏(i=1..N) pi^zi   (4)

The logarithm likelihood function of f is:

ln L = Σ(i=1..N) zi ln(pi)   (5)

The first-order partial derivatives of the logarithm likelihood function with regard to the αk are:

∂ln L / ∂αk = Σ(i=1..N) zi (1 − pi) aik   (6)

With the convention that ai0 = 1, the maximum points of the logarithm likelihood function are obtained by setting these derivatives to zero:

Σ(i=1..N) zi (1 − pi) aik = 0, for k = 0, 1, ..., n   (7)

Because Equation (7) is a set of n + 1 non-linear equations in n + 1 variables, its solution (α0*, α1*, ..., αn*), the maximum likelihood estimates of α0, α1, ..., αn, can be found by applying numerical analysis methods such as the Newton-Raphson method ([5], pp. 67-79). Substituting the estimates α0*, α1*, ..., αn* into Equation (3) yields the concrete multivariate logistic model f.
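The numerical fitting of the parameters can be sketched as follows. The paper solves Equation (7) with Newton-Raphson; here a plain gradient ascent on the standard binary-label log-likelihood is used as a simpler stand-in, and the toy training pairs (ratings of one GUP item paired with a 0/1 preference label under context c) are assumptions.

```python
import math

def fit_logistic(xs, ys, lr=0.1, iters=2000):
    """xs: single-feature inputs; ys: 0/1 preference labels; returns (a0, a1)."""
    a0, a1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a0 + a1 * x)))
            g0 += y - p            # derivative w.r.t. a0 (intercept, a_i0 = 1)
            g1 += (y - p) * x      # derivative w.r.t. a1
        a0, a1 = a0 + lr * g0, a1 + lr * g1
    return a0, a1

xs = [1, 2, 2, 3, 4, 5]            # predicted ratings of a GUP item
ys = [0, 0, 0, 1, 1, 1]            # 1 = preferred under context c
a0, a1 = fit_logistic(xs, ys)
p5 = 1.0 / (1.0 + math.exp(-(a0 + a1 * 5)))
print(p5 > 0.9)                    # a high rating yields a high probability
```

As in the paper, this fitting is performed offline; only the cheap evaluation of Equation (3) happens at recommendation time.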