Helping with Productivity Assessment
Given that we can accurately detect activities in an office, how can we apply this capability? One intuitive application is productivity assessment. Franklin & Covey's time management program suggests an exercise in which a person records their activities every 15 minutes for a whole week. At the end of the week, the person analyzes the record to find patterns and habits that may need to change. However, recording one's own activities and productivity levels is tedious and can detract from work. An activity detection system can help by automatically recording a person's activity and correlating it with productivity.

Another benefit of applying an activity detection system to productivity assessment is that a computer system can record and store activities non-stop for weeks, months, and years. Data spanning that much time is a rich resource for analysis and can provide a comprehensive understanding of a person's behavior and habits.

In this part of the project, we try to answer the following questions:
  Can we measure productivity by looking at activities?
  How aware are people of their own productivity?
 
Procedure

The term "productivity" is loaded and can be very difficult to quantify or measure. People may be doing a lot of work yet say they are unproductive because they are not making progress on their task. For the deployment of the system, I defined "productivity" as the percentage of time spent actively engaged in a work-related task.

I used an "experience sampling" technique to capture each participant's current level of productivity: at certain times throughout the day, the user documents how productive they have been. In this deployment, the user is alerted every 15 minutes to write down their productivity level.

The user is provided a sheet to rate their productivity from 0% to 100%. On the sheet, the measure of productivity is stated as: "What percentage of the past 15 minutes did you spend actively engaged in a work-related task?"

For the timer, I created a Flash application that plays a bird sound every 15 minutes. The bird sound makes the reminder more pleasant and less jarring.


Figure 1. The timer used to remind users to record their current productivity level.
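The deployed timer was a Flash application; as a rough sketch (in Python, with hypothetical names, not the deployed code), the core logic of firing on the next 15-minute boundary might look like:

```python
from datetime import datetime, timedelta

def next_reminder(now: datetime, interval_min: int = 15) -> datetime:
    """Return the next clock time aligned to an interval_min boundary,
    i.e. when the next reminder sound should play."""
    minutes = (now.minute // interval_min + 1) * interval_min
    base = now.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=minutes)

# e.g. at 9:07 the next reminder fires at 9:15, at 9:59 it fires at 10:00
```

A real reminder loop would sleep until `next_reminder(datetime.now())` and then play the sound.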
 
Results

Can we predict productivity if we know how often a particular activity is being performed? For example, how productive is a person who sits 75% of the time and is outside the office 25% of the time? To answer this, I aggregated all the activity labels within 15-minute intervals. The aggregated values (percent outside, percent sitting, percent sitting and talking, percent walking) represent the percentage of time spent performing each activity. I used these values to predict the "productivity" levels recorded by the users.
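The aggregation step can be sketched as follows (hypothetical function and label names, not the code used in the deployment):

```python
from collections import Counter

# Hypothetical label names standing in for the detector's output.
ACTIVITIES = ("outside", "sitting", "sitting_and_talking", "walking")

def aggregate_window(labels):
    """Turn the activity labels observed in one 15-minute window into
    percentage-of-time features, one per activity."""
    counts = Counter(labels)
    total = len(labels)
    return {a: counts[a] / total for a in ACTIVITIES}

# A window that is 75% sitting and 25% outside:
features = aggregate_window(["sitting"] * 3 + ["outside"])
```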

I ran various machine learning algorithms (Bayes Net, Naive Bayes, Logistic Regression, Decision Trees, and Naive Bayes Trees) on these percentage-of-time features to predict whether a person had been productive or not, repeating the analysis on different subsets of participants.
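As a minimal sketch of this classification step, here is the same idea in scikit-learn (an assumption on my part; the study's actual toolkit is not stated here). The data below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Features per 15-minute window: fractions of time spent
# (outside, sitting, sitting_and_talking, walking); rows sum to 1.
X = rng.random((100, 4))
X /= X.sum(axis=1, keepdims=True)

# Toy stand-in for the self-reported productive / not-productive label.
y = (X[:, 1] > 0.25).astype(int)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2%}")
```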
Participant     Accuracy   Classifier            Recall (Productive)   Recall (Not productive)
All             62.11%     Logistic Regression   0.792                 0.447
Profs only      64.62%     Logistic Regression   0.788                 0.500
Students only   56.67%     Logistic Regression   0.533                 0.600
w/o student 1   65.79%     Logistic Regression   0.795                 0.514
w/o student 2   63.10%     Logistic Regression   0.857                 0.405
w/o prof 1      68.18%     Logistic Regression   0.000                 0.978
w/o prof 2      76.27%     Logistic Regression   0.930                 0.313
Table 1. The best accuracies in predicting productivity using percentage of time spent on a particular activity.

The table above shows that whether a person is productive or not can be determined, to some degree, from the percentage of time spent on each activity. The accuracy is not high, which can likely be attributed to the limited amount of data.

Notice that using all the data, we get a classification accuracy of 62.11%. If we look at professors only, accuracy increases by about 2.5 percentage points, while if we look at students only, it decreases by about 5.5 points. This difference can be attributed to variability in people's concepts of productivity. With more data, we could train a classifier for each person, which may eliminate this variability.
Participant     Accuracy (Difference)   Classifier            Recall (Productive)   Recall (Not productive)
All             65.77% (+3.66%)         Logistic Regression   0.642                 0.672
Profs only      73.97% (+9.35%)         Logistic Regression   0.697                 0.775
Students only   63.16% (+6.49%)         Decision Tree         0.650                 0.611
w/o student 1   65.22% (-0.57%)         Naïve Bayes Tree      0.955                 0.375
w/o student 2   71.74% (+8.64%)         Logistic Regression   0.714                 0.720
w/o prof 1      68.29% (+0.11%)         Logistic Regression   0.320                 0.842
w/o prof 2      73.13% (-3.14%)         Decision Tree         0.833                 0.474
Table 2. The best accuracies in predicting productivity using the features from the environment and computer sensors directly.

The table above shows that it is slightly better to use the raw sensor features to determine whether a person is productive or not. Although this result ran counter to my intuition that knowledge of activity would help in determining productivity, it is an interesting result that could be explored further with a longer deployment of the system. Notice that in some cases (Profs only and w/o student 2), we get improvements of almost 10 percentage points.