    Ph.D. Not Required: Using Data to Maximize Effectiveness

    November 14, 2017 | By William Jones and Cedric Headley

    Best Practices

    TAGS: operations, program development, journal

    NACE Journal, November 2017

    Data can help you address complicated questions with simple answers.

    Charles Babbage, considered by many to be the father of the computer, and Dr. Seuss walked into a NACE conference mixer one evening. Babbage said to the doctor, “Errors using inadequate data are much less than those using no data at all.” Dr. Seuss had a cheeky response: “Sometimes the questions are complicated and the answers are simple.”

    While these two historical figures never actually met—let alone applied for professional development support to go to a NACE conference—their actual quotes provide key messages that are relevant to how we can view data within career services.

    On three occasions over the last two years, we have had the opportunity and honor to present at various national and regional conferences on how Rutgers University – New Brunswick collects and uses data to maximize effectiveness. Judging from the crowds that participated in these sessions, data certainly continue to be of importance to our profession. Incoming students and their parents request post-graduation survey data to understand return on investment, and deans seek out this information for accreditation purposes. Data are useful when evidence is needed to combat dwindling resources or to show the world that we do indeed facilitate important work that impacts the lives of our students. We continue to collect tracking data for all sorts of purposes and include this information in annual reports each year. Many of us seem to have a handle on why it is important to collect and report on these data, even if it is just because someone important requests it of us. So, we won’t focus on these types of items in this article. Instead, we will focus on how to move from creating exciting infographics to using data to impact decisions regarding some of the day-to-day challenges we all face.

    We would like to note a few items: Data can be dry, and, in our presentations, we try to mitigate that by being entertaining. It is extremely difficult to be as hilarious in writing as we are in our presentations. It is also difficult in written form to provide the wealth of examples for everything we will discuss. This is why we are providing a tool kit that includes the same samples we go over in our live presentations: Please see careers.rutgers.edu/NACE for the kit.

    If we haven’t scared you away, read on to see how we went about answering the following questions at Rutgers University:

    1. How do we increase student attendance at workshops while decreasing staff workloads?
    2. How do we increase student applications to positions in our posting system?
    3. How do we determine employer development priorities?
    4. How do we accurately generalize survey results to the entire student population?

    Using Data From the Career Management Platform

    The first part of this article will focus on how to use data from your career management platform to answer the first three questions posed above. We all collect a lot of data within our platforms, but what do we do with all of this information? How can we use it to make more-informed decisions? And, how can we use it to increase effectiveness?

    Increasing attendance while decreasing workloads: When we arrived at Rutgers more than four years ago, we were asked to solve the problems of dwindling program attendance and increasing staff programming burnout.

    Let’s walk through our process to answer this particular challenge:

    • The first thing we decided to do was create a cross-functional programming committee composed of service providers as well as our marketing and assessment staff. Every semester, this committee meets to develop the programming calendar for the following term.
    • During these meetings, we pull attendance data from our career management system for each program from the prior semester. If a particular program continually has low attendance, we ask ourselves if the content of that program is relevant to our core mission and important for the student to know.
    • If the answer is “no,” we get rid of the program—something that many offices find difficult to do for a variety of reasons. However, if the answer to the question is “yes,” we also delete the stand-alone program from our roster, but we incorporate its information into more popular programs, such as networking events and career fair success events, or even career conferences that tend to draw larger crowds.

    The committee also uses student interest data pulled from our career management platform. We added questions to the student profile to collect data on the type of career-related topics students were most interested in, as well as questions related to a student’s career interests. (Note: At Rutgers, we call them “career interest clusters.”) As a result, we now have some useful data that help us make more-informed decisions regarding what programs we should focus on in the upcoming semester based on what students actually want.

    Over the past three years of using this simple process, we have been able to increase student participation in our programs by 130 percent, while decreasing the number of workshops our staff have to organize by 13 percent. As Dr. Seuss noted, the question may be complicated—how to increase student participation while decreasing the workload—but the answer can be relatively simple: Use all the information that you are already collecting to make a decision about the programs you coordinate. (Please note: See our online tool kit at careers.rutgers.edu/NACE for our EACE presentation, which illustrates how we added questions to our system and ran reports full of useful data.)

    Increasing student applications to our posting system: As is the case at many other institutions, we heard from employers that their postings were not getting the results they wanted and expected. In response, we set up job and internship blast e-mails based on the student’s major; however, with more employers becoming major “agnostic” (actually a good trend!), these e-mails were not having the intended impact. The challenge for us was to find a way to actively promote newly posted positions in our system that were of actual interest to students.

    What we decided to do was relatively simple: We added a question to our job posting form that asked employers to identify the “position cluster” to which their posting belonged. Similar to a job function, the “position cluster” is available to employers through a pick list that exactly mirrors the career interest clusters that we have asked students to define.

    As a result, we now have two data sets that are really powerful for the purposes of our job and internship e-mail blasts: 1) the career interests of all the students who are in our system regardless of the student’s major; and 2) position cluster information from employers that allows us to identify various postings based on job function (position cluster) rather than major or industry. For example, if a student selected the “arts, communications, and entertainment” cluster, then he or she receives recently posted positions related to this cluster, irrespective of the employer’s industry or submitted academic major category. An added benefit: This also solves the issue of students receiving notices about irrelevant positions due to “all majors” postings.

    We also took this one step further by incorporating the student’s class level into the e-mail configuration. First-year and sophomore students within a particular cluster receive only recently posted part-time and internship positions; juniors receive internship and full-time job listings; seniors receive only full-time positions.
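
    To make the mechanics concrete, here is a minimal sketch in Python (not our actual implementation; field names such as career_cluster, position_cluster, and class_level are illustrative) of how each student’s blast can be assembled by matching the student’s career interest cluster against employers’ position clusters and then filtering by class level:

        # Position types each class level should receive, per the rules described above.
        TYPES_BY_CLASS_LEVEL = {
            "first_year": {"part_time", "internship"},
            "sophomore":  {"part_time", "internship"},
            "junior":     {"internship", "full_time"},
            "senior":     {"full_time"},
        }

        def postings_for_student(student, recent_postings):
            # Keep only recently posted positions in the student's career interest
            # cluster whose position type fits the student's class level.
            allowed_types = TYPES_BY_CLASS_LEVEL[student["class_level"]]
            return [p for p in recent_postings
                    if p["position_cluster"] == student["career_cluster"]
                    and p["position_type"] in allowed_types]

        # Example: a junior interested in the arts, communications, and entertainment cluster
        student = {"class_level": "junior",
                   "career_cluster": "arts, communications, and entertainment"}
        recent_postings = [
            {"title": "Marketing Intern",
             "position_cluster": "arts, communications, and entertainment",
             "position_type": "internship"},
            {"title": "Staff Accountant",
             "position_cluster": "business, financial services, and logistics",
             "position_type": "full_time"},
        ]
        print([p["title"] for p in postings_for_student(student, recent_postings)])
        # -> ['Marketing Intern']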

    The question seemed complicated, but the integration of data made the solution relatively simple. And the results were mind-blowing. Four years ago, our major-centric job and internship blasts were not having the intended effect of increasing applications to our job postings. Since the change, we have seen a 58 percent increase in student applications to employer postings in our system over the past three years. More importantly, over this same period of time, we have witnessed a 24 percent increase in the overall number of graduating students reporting that our posting system positively contributed to their post-graduation success; moreover, the majority of our students are educated through the School of Arts & Sciences, and we saw a 31 percent increase in the number of these graduates citing our posting system as having contributed to their post-graduation success. One number that we are happy to see decrease is the “unsubscribe” rate for these types of e-mails; once as high as the teens, the rate currently stands at 6.5 percent.  You can view a sample of our job and internship blast e-mails, including html source code, in the online tool kit at careers.rutgers.edu/NACE.

    Determining employer development priorities: Resources are finite, so you can’t be everything for everyone. We have to accept that reality and identify priorities. Certainly, this is the case with employer development efforts.

    A few years ago, we were asked to help our employer relations unit determine employer development priorities. The question was hard, but the answer was relatively simple: Determine where your areas of greatest need are, and where you can get the most bang for your proverbial buck.

    The first thing that we did to tackle this challenge was to meet with our employer relations leadership to determine their focus. That very first year, they were interested in a plan based on various opportunities broken down by major categories, e.g., engineering, business, liberal arts; in later years, after we changed our focus away from major, we focused more on the career interest clusters.

    We ran a report on the total number of positions posted the previous year, broken down by the major the employer was targeting; we also pulled the total number of students in each major category, based on the student profile information housed in our career management platform. (In subsequent years, we used the position clusters and career interest clusters for the report.) This provided us with two powerful data points that allowed us to perform a position gap analysis to identify areas of greatest need. A position gap analysis compares the percentage of positions available in a particular category (major preferred or position cluster) to the percentage of students within that same category. If there is a large gap between the percentage of positions and the percentage of students, then that is an area that needs greater focus. A large negative gap means we have too few positions for the category in proportion to the student population interested in those types of positions; a large positive gap means the opposite.

    For example, we realized that the share of opportunities in the “business, financial services, and logistics” career cluster was much larger than the share of students in that cluster (a positive gap), while the “arts, communications, and entertainment” cluster showed the reverse. Therefore, we focused more of our employer development resources on increasing opportunities for the students in the “arts, communications, and entertainment” cluster. The question seemed complicated, but the solution was relatively simple. See the online tool kit (careers.rutgers.edu/NACE) for an example of a mock position gap analysis (no real Rutgers data were used); it includes Excel formulas designed to help you plug your data into the grid and determine the size of your potential position gaps.
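
    If you would rather script the gap analysis than build it in Excel, the short Python sketch below (using made-up counts, not real Rutgers data) shows the core calculation: compare each cluster’s share of postings to its share of students, and flag clusters with large negative gaps for employer development attention.

        # Mock counts only; substitute your own posting and student totals per cluster.
        postings_by_cluster = {
            "business, financial services, and logistics": 4200,
            "arts, communications, and entertainment": 600,
        }
        students_by_cluster = {
            "business, financial services, and logistics": 9000,
            "arts, communications, and entertainment": 6000,
        }

        total_postings = sum(postings_by_cluster.values())
        total_students = sum(students_by_cluster.values())

        for cluster in postings_by_cluster:
            posting_share = postings_by_cluster[cluster] / total_postings
            student_share = students_by_cluster[cluster] / total_students
            gap = posting_share - student_share  # negative = too few postings for the interest
            print(f"{cluster}: {gap:+.1%} gap")
        # business, financial services, and logistics: +27.5% gap
        # arts, communications, and entertainment: -27.5% gap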

    Statistics Without Fears or Tears

    Recall Babbage’s statement at the beginning of the article, which, essentially, says that using even just a single data point is much better than guessing.

    But, what do you do when you don’t currently have any usable data at all? What if you want to make decisions based on the preferences of students who are not currently connected to you?

    The typical response would be to create a survey, and this answer is partially correct. Where we tend to go wrong in the process is in failing to make sure that the results are accurately representative of the overall population. To avoid this, we use a statistical approach to surveying students that allows us to generalize the results to our entire population. This statistical procedure, which is how we answer our fourth question, is relatively simple: Call it “statistics without fears or tears.” (Note: Our online tool kit offers tools to assist you with much of the statistical part of this process relating to selecting your sample of students to complete a survey.)

    At Rutgers University, we have used a stratified random sampling method of collecting responses to make decisions about time of day to have a career fair, which e-mail address students are more likely to open, the preferred time to have advising sessions and workshops, and more. The results we obtained from this sampling method were generalizable to the overall population, and decisions were more confidently made based on data with a much higher level of accuracy. There are many random sampling techniques that statisticians use, but the stratified random sampling (SRS) method was just right for us.

    Use the following steps to obtain a stratified random sample of responses that can be generalized to the larger population of interest:

    1. identify your population of interest;
    2. determine the sub-groups within the population of interest;
    3. determine the sample size;
    4. select a random sample from each sub-group; and
    5. calculate percentages.

    Step 1) Identify your population of interest: For the purpose of this article, we are defining our population of interest as all currently enrolled undergraduate students at Rutgers. A list of all the students from your population of interest is needed for an SRS. This information may be readily available in most of your career management platforms (if you automatically import your data) or from the registrar’s office. For illustration purposes, let us assume that we are working with a list of 50,000 undergraduate students in an Excel document as our population of interest.

    Step 2) Determine the sub-groups within the population of interest: Sub-groups are important, as various segments of a population may have differing views or responses that need to be represented in your sample. We create relatively homogeneous sub-groups based on the data we would like to capture. For example, if you were attempting to assess how students felt about your department’s diversity programming, then you might want to divide (or stratify) your population into certain ethnic or gender sub-groups. In our case, we divided the undergraduate population into class-level sub-groups (i.e., first year, sophomore, junior, and senior). Note that we also divided those class-level sub-groups by academic school, as we know perceptions of our services may vary by a student’s academic class level and academic school. However, for this example we are going to focus on just the academic class levels.

    Step 3) Determine the sample size needed based on the population: Given our population of interest—50,000 undergraduate students—surveying all students and achieving an acceptable response rate would be too time-consuming and labor-intensive. To avoid this, we will use a sample of the population instead.

    Determining the required sample size for an SRS can be a complex task. However, the process we describe here is a simple variant on this method, with the goal of providing an easier way to estimate proper sample sizes. In this method, we use the sample size we would need for a simple random sample, knowing that this will be more than the minimum sample size needed if we were to use the SRS formula. Table 1 shows the minimum sample size needed, based on various population sizes, with a margin of error of 5 percent at a 95 percent confidence level. These are the margin of error and confidence level percentages typically used within the social sciences and education fields. We suggest you use these values in the tool we found (see www.checkmarket.com/sample-size-calculator/); we also provide a link to this tool in our online kit.

    From Table 1, we see that at minimum we need 382 responses to our survey (the sample size) in order to adequately generalize those results to the entire population of 50,000 undergraduates. However, we know that we probably won’t get a response from every student we invite, so we need to invite more students based on our historic response rates. At Rutgers, we tend to have a 20 percent response rate for surveys; so, in order to obtain the 382 responses needed to generalize to the entire population, we would need to invite about 1,910 students to complete the survey.

    Table 1: The minimum sample size and number to invite for a margin of error of 5% at a 95% confidence level

    Population size    Minimum sample required    Number to invite based on a 20% response rate
    1,540              309                        1,540
    2,000              323                        1,615
    2,500              334                        1,670
    5,000              357                        1,785
    7,500              366                        1,830
    10,000             370                        1,850
    15,000             375                        1,875
    20,000             377                        1,885
    50,000             382                        1,910
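
    The minimum sample sizes in Table 1 follow from the standard finite-population sample size formula for a simple random sample (5 percent margin of error, 95 percent confidence, worst-case proportion of 0.5). If you prefer a short script to the web calculator, this Python sketch reproduces the table values (give or take a unit of rounding) and the 20 percent response rate adjustment:

        import math

        def minimum_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
            # Finite-population sample size for a simple random sample.
            numerator = population * z**2 * p * (1 - p)
            denominator = margin_of_error**2 * (population - 1) + z**2 * p * (1 - p)
            return math.ceil(numerator / denominator)

        def number_to_invite(population, response_rate=0.20):
            # Invitations needed to expect the minimum number of responses,
            # capped at the population size (you cannot invite more students than you have).
            needed = minimum_sample_size(population)
            return min(population, math.ceil(needed / response_rate))

        print(minimum_sample_size(50_000), number_to_invite(50_000))  # 382 1910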

    Step 4) Select a random sample from each sub-group: We can select a random sample from each sub-group by using the proportional method. That is, we calculate the percentage of students in each sub-group with respect to the overall population of interest. For example, first-year students may make up 25 percent of our total population, sophomores 30 percent, juniors 25 percent, and seniors 20 percent. Therefore, the 1,910 survey recipients should break down into these same sub-group proportions. Table 2 illustrates the number of students we need to select from each sub-group based on the above information.

    Table 2: Number of students required from each sub-group

    Academic class level    Population percentage    Number to select (based on a sample of 1,910)
    First year              25%                      478*
    Sophomore               30%                      573
    Junior                  25%                      478*
    Senior                  20%                      382

    *Rounded up, as you cannot have half a student.
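
    The Table 2 allocation is easy to reproduce in a few lines of Python (a sketch only, using the example proportions above):

        import math

        total_invites = 1_910
        population_share = {"first year": 0.25, "sophomore": 0.30,
                            "junior": 0.25, "senior": 0.20}

        # Allocate invitations to each sub-group in proportion to its share of the
        # population, rounding up because you cannot survey half a student.
        invites_per_group = {group: math.ceil(total_invites * share)
                             for group, share in population_share.items()}
        print(invites_per_group)
        # {'first year': 478, 'sophomore': 573, 'junior': 478, 'senior': 382}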

    Now you know how many students should be part of your sample and what the breakdown of those students should be based on your representative sub-groups. However, you still need to randomly select the students within each group from your entire population of students. Selection from each sub-group is done randomly in order to avoid any selection bias.

    The simplest way to do this is by using the little-known “RAND()” function in Microsoft Excel, which generates a random number each time it is called. To select our sample of 1,910 students, we list all students within the population in one column of the Excel sheet, then sort the entire population list by their sub-group classification (in our case, the academic class level). Next, we use the “RAND()” function: Type “=RAND()” in the column next to the first student, then copy that RAND() cell into the cell next to every student in your Excel sheet. (Note: You can view a sample of this procedure in the presentation located in our online tool kit.) Now, we re-sort the list by academic class level and then by the random number column, thus randomizing the order of students within each class level. Finally, we simply select the first 478 students from the first-year group, the first 573 from the sophomore group, and so on, until students have been selected from each sub-group. The students selected from each sub-group are the students who will receive the survey.
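
    If your student list lives outside of Excel, the same shuffle-and-take-the-first-n procedure looks like this in Python (a sketch only; the roster here is fabricated, and in practice you would load the real list exported from your career management platform or the registrar):

        import random

        # Fabricated roster sized to match the example proportions (25/30/25/20 of 50,000).
        group_sizes = {"first year": 12_500, "sophomore": 15_000,
                       "junior": 12_500, "senior": 10_000}
        roster = [{"student_id": f"{level}-{i}", "class_level": level}
                  for level, size in group_sizes.items()
                  for i in range(size)]

        invites_per_group = {"first year": 478, "sophomore": 573,
                             "junior": 478, "senior": 382}

        sample = []
        for level, n in invites_per_group.items():
            group = [s for s in roster if s["class_level"] == level]
            random.shuffle(group)      # equivalent to sorting each group on RAND()
            sample.extend(group[:n])   # take the first n students from the shuffled group

        print(len(sample))  # 1911 invitations (slightly above 1,910 because two groups round up)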

    Step 5) Calculate survey results and interpret your data: After collecting your survey results, you can calculate the percentage of responses within each of your sub-groups. To generalize this to the overall population, we simply calculate the weighted average, where the weights are the sub-group population percentages. (See Table 2.) For example: Imagine we are interested in the percentage of students who would attend a career fair on a Friday. We find 75 percent of first-year students, 80 percent of sophomores, 60 percent of juniors, and 65 percent of seniors state they are in favor of a Friday career fair. The estimated population percentage in favor of having a career fair on Friday is found by multiplying each weight by the respective sub-group proportion and summing the results. (See Table 3.)

    Table 3: Calculating stratified sample proportion

    Sub-group     Population proportion (A)    Proportion in favor of Friday career fair (B)    A * B
    First year    0.25                         0.75                                              0.1875
    Sophomore     0.30                         0.80                                              0.2400
    Junior        0.25                         0.60                                              0.1500
    Senior        0.20                         0.65                                              0.1300
    Total                                                                                        0.7075 (sum of column)

    From Table 3, we see that approximately 71 percent of the undergraduate student population would be in favor of having a career fair on Friday, with a +/- 5 percent margin of error. (Note: In this case, margin of error means that when we generalize to the overall undergraduate population, that 71 percent could actually be as high as 76 percent or as low as 66 percent. Sometimes, you might get more or fewer responses from your sub-groups than planned, which would affect how large or small your margin of error is. But showing the actual calculations for this may lead you to shred this article, so we again refer you to the tool within the tool kit to calculate your actual margin of error based on the actual number of responses you received. Because we use a stratified random sample, the true margin of error is usually less than 5 percent.)
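
    For completeness, here is the Table 3 arithmetic as a brief Python sketch (the sub-group results are the hypothetical Friday career fair numbers above, not real survey data):

        population_share = {"first year": 0.25, "sophomore": 0.30,
                            "junior": 0.25, "senior": 0.20}
        favor_friday = {"first year": 0.75, "sophomore": 0.80,
                        "junior": 0.60, "senior": 0.65}

        # Weighted average of the sub-group results, weighted by population share.
        estimate = sum(population_share[g] * favor_friday[g] for g in population_share)
        print(f"{estimate:.4f}")  # 0.7075, i.e., roughly 71 percent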

    Surprise—we’re done! We hope you have overcome your fear of using the stratified random sampling method to collect survey data. If not, you can certainly partner with a graduate student or your university research department to assist you in this endeavor.

    Data Do Not Have to Be Scary

    As you can see, data do not have to be scary. Collecting data and not using it in your decision-making process should be the only thing that scares you. Simply using data collected through your career management platform can help you make all sorts of decisions, from planning workshops to starting an employer development plan to determining when to hold your career fairs. You don’t need a Ph.D. to fall in love with data or—most importantly—to use data to increase your effectiveness and make decisions.

    William Jones is senior director of university career services for Rutgers University – New Brunswick. He provides senior executive leadership to the department, manages the Operations & Strategic Initiatives unit, and is a member of the senior leadership team. He works closely with the executive director of university career services and other members of the senior leadership team to ensure that the department has the resources necessary to serve Rutgers students. Prior to joining Rutgers, he was the associate director for external relations and information technology for the University Career Center and the President's Promise at the University of Maryland, College Park.

    Cedric Headley is the assistant director for outcomes and assessment in university career services at Rutgers University – New Brunswick. He is responsible for designing and implementing departmental research projects, evaluating the department’s programs and services, and analyzing and disseminating career services data for internal as well as external use. He works closely with the senior leadership team to ensure that the team has the information necessary for informed decision-making. Prior to joining Rutgers University, he was employed by Kean University as a data and program assistant in undergraduate admissions, and served as an adjunct instructor for the economics department.
