The Code4Lib Journal, Issue 35, 2017-01-30

What's New? Deploying a Library New Titles Page with Minimal Programming

With a new titles web page, a library has a place to show faculty, students, and staff the items it is purchasing for their community. Too often, however, heavy programming requirements, a LAMP stack (Linux, Apache, MySQL, PHP), or APIs stand between a library's data and a working new titles page. Without IT staff, a new titles page can become nearly impossible, or not worth the effort. Here we demonstrate how a small liberal arts college combined its acquisitions data with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that is dynamic and engaging for its users.

by John Meyerhofer

A new titles list is a great way for libraries to show off their new items, to demonstrate to faculty that their requests are being purchased, and to provide a tool for students to find new items at the library. In the physical library, a common tool for highlighting new titles is a book display; in the virtual world, a web page can achieve a similar result. However, a lack of programming knowledge often prohibits libraries and librarians from implementing one. Intimidating descriptions of LAMP stacks, with PHP scripting and MySQL databases, can scare even the most technical librarian away from attempting a deployment, and most current OPAC vendor systems don't include viable options for deploying a new titles list either. Using these programming tools typically means being proficient at extracting data from a catalog, importing that data into a database, and finally developing a web application to read, display, and maintain it. Surprisingly, this doesn't have to be the case. At Macalester College, we were able to develop a relatively robust new titles web page with only some basic OPAC data, a Google Sheet, HTML, and a little JavaScript.

To understand how we arrived at our end result, it's important to look at the history of the previous new titles page and how it was structured and developed. We'll also examine the reasons for moving to a simpler version. Finally, we'll walk through the development of the new process and detail a typical procedure for updating the page with new title data. Through these steps, we hope that many libraries will see this as a starting point for developing their own new titles list, or for re-examining an existing one.

History

At Macalester College, over the previous few years we had developed a new titles web page that used PHP to access new titles data in a MySQL database. This page was developed in-house on a library server that sat outside the traditional realm of institutional servers and was therefore only lightly supported by Macalester's Information Technology Services (ITS): the server was backed up, but its operating system and applications received no updates. The existing process for updating the new item data involved first exporting data from OCLC's WorldShare using the WMS Report Designer. We then took this data, a tab-delimited file, and imported it into a MySQL database using a PHP script. The script did some text conversion and also retrieved each new title's cover image.
This process worked well for the library over a one- to two-year timeframe, with minor changes and enhancements. Historically, the DeWitt Wallace Library doesn't purchase a large number of new materials. Typically we see about two hundred and fifty new titles in a month, averaging three to four thousand in a year. The number of titles each week fluctuates during the academic year, with spring seeing more than the other seasons. This fluctuation is the result of many factors, but as at many academic libraries, static budgets and falling circulation have affected purchasing. With these kinds of numbers, updates to the new titles page traditionally occur every other week. Finally, new item records are given a two to three day window for processing, to ensure that items on the web page are actually in the library.

Reasons for Change

Although the existing new titles page was working and satisfied our needs, several factors led us to re-assess it. First, and most important, ITS had informed us that our server's operating system would no longer be supported and that the server was slated for retirement. Without support, we could no longer depend on the web pages and applications it hosted. Beyond the operating system, individual software programs were extremely old, too. In particular, PHP and MySQL, the main components of the existing new titles list, were several versions behind currently acceptable ones. Without updates to the operating system and the main software running the page, the library and the college were exposed to security risks and vulnerabilities.

With the impending retirement of our server, we needed a new home for the new titles page. Initially we entertained many ideas, including another LAMP stack deployment or even a cloud-based solution. To arrive at a more practical solution, we focused our criteria for the new titles page. First, we ideally wanted the page to be integrated with our campus Content Management System (CMS). The existing page, which lived outside our CMS, would have required extensive rework to match the campus template and be mobile ready. Integrating with our CMS (dotCMS) offered many benefits: standard styling, an instant match with the look and feel of our existing library web pages, and the security of backups, file versioning, and the campus's umbrella of supported technologies and environments. We were also lucky in that our CMS allowed us to deploy pages with JavaScript. Our second criterion was that the new titles page dynamically display data with pagination and filtering, features our users were accustomed to from the existing page. Third, we needed a place to store our new titles data that would allow us to quickly update and add items when new titles were purchased. In the past this was a MySQL database; without that option, we wondered whether other technologies might satisfy, or at least mimic, a database's functionality. Our final criterion was that the page be fast, scale automatically with the addition of new item data, and be easy to maintain. Speed was paramount, as most users won't wait even a few seconds for a page to load. Scaling with the addition of data would be a benefit and would ultimately help make the page easy to maintain.
As with most projects involving any amount of system data, some manual work might be required, but keeping it to a minimum would help create a solution that was simple and easy to sustain. With all of these criteria combined, it was hard to imagine a solution that didn't involve programming.

Awesome Table

Our first attempt, ultimately a failed one, was to use the Awesome Table Google Sheet template. This was a fairly robust solution in which data was housed in a Google Sheet and then included within a web page via an HTML iframe tag. The tool came with filtering, searching, and pagination and seemed like a perfect fit, except that during our testing, typical load times for the page averaged 12 seconds.

Figure 1. Awesome Table progress bar while loading.

While loading the data from the Google Sheet, the tool did display a small blue progress bar to show that it was processing; however, the 12-second delay was too long, and we decided to look elsewhere for a solution.

A Solution Presents Itself

After working with the Awesome Table product, in which a Google Sheet was used to store data, housing the new titles data in a Google Sheet seemed like a solution to many of our criteria. In a spreadsheet, the data could easily be updated and new rows added, addressing our criterion of maintainability. However, as we discovered with Awesome Table (and in researching other tools like Tabletop.js), the Google Sheets API, as others have pointed out, may not be production worthy. There is concern that Google could impose rate limits on access, that the API may add extra load to the display, and that with a publicly accessible Google Sheet, an ill-intentioned person could trace back the API calls to access the data.

The epiphany came when we realized that the Google Sheet housing our data could also be used as a tool to create the HTML display. If the sheet generated HTML code that could then be copied into a CMS-housed page, updating the new item data and populating the new titles page could be a simple, if not fully automated, process. To achieve this vision, we needed a client-side tool to handle the pagination and filtering. A JavaScript tool seemed like the best fit for our needs: running locally meant it would likely be faster, and implementation would likely be easier since our CMS allowed us to deploy JavaScript code. This line of thought led us to List.js, a JavaScript library that can add searching, filtering, and sorting to an HTML item list or HTML table. Combined with its Pagination plugin, List.js satisfied our criteria of CMS integration, pagination, and filtering.

After deciding on List.js, the development path became fairly straightforward. First, we would create an HTML page with List.js and the Pagination plugin. Next, we would get each item's book cover image, which was not part of our data extract. Third, we would concatenate all of the data into an HTML title list that could become part of an include file, using a formula along the lines sketched below. Finally, we would include that file in the HTML page to dynamically generate our new titles list. In the end we would have a Google Sheet that houses all of the data, an HTML file in our CMS that displays the data, and an include file containing the HTML new titles list that is dynamically included in the main HTML file. In this way we compartmentalized the data, the display, and the include file.
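As a rough sketch of the concatenation step, a spreadsheet formula like the following can build one HTML list item per row. The column layout (title in column A, author in B, item type in C, department in D, cover image URL in E) and the class names here are illustrative assumptions, not the exact formula we used:

    ="<li><img src=""" & E2 & """ alt=""" & A2 & """ /><h3 class=""title"">" & A2
      & "</h3><p class=""author"">" & B2
      & "</p><span class=""itemtype"" style=""display:none"">" & C2
      & "</span><span class=""department"" style=""display:none"">" & D2
      & "</span></li>"

Filling a formula like this down the sheet yields a column of ready-made list items that can be copied wholesale into the include file.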
Create the HTML File

To start, we built an HTML page that formatted a basic set of test data, which allowed us to fine-tune the display. Utilizing our campus pattern library, we quickly created an HTML item list of eight items, each with a book cover image to the left of the title and author. Next, we integrated List.js and its Pagination plugin into the page, allowing us to test filtering by item type (e.g., Book, DVD) and purchasing department (e.g., Chemistry, English). The filtering was accomplished by including these data values in the HTML inside hidden tags. Even when hidden, List.js is able to use this data for filtering based on user choices.
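A minimal sketch of that wiring follows; the element id, class names, and page size are illustrative, and depending on the List.js version, pagination is either a bundled option (as shown here) or a separate plugin script:

    <script src="list.min.js"></script>
    <script>
      // Declare every field List.js should read from each <li>,
      // including the hidden itemtype and department values.
      var options = {
        valueNames: ['title', 'author', 'itemtype', 'department'],
        page: 20,          // items shown per page
        pagination: true   // needs a <ul class="pagination"></ul> in the container
      };
      // 'new-arrivals' is the id of the container element wrapping the list.
      var newTitles = new List('new-arrivals', options);
    </script>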
Final HTML file

The New Arrival page displays the new items from Macalester's DeWitt Wallace Library for the 2016-17 academic year. Powered by List.js with images provided by OCLC.
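A skeleton of such a page, under the same illustrative names as above, might look like this (the #dotParse() call is dotCMS's server-side include directive; its placement inside the list element is our assumption):

    <div id="new-arrivals">
      <input class="search" placeholder="Search new titles" />
      <!-- Filter buttons would go here; see the filtering snippet below. -->
      <ul class="list">
        #dotParse("/library/new/new_arrival_list.vtl")
      </ul>
      <ul class="pagination"></ul>
    </div>
    <!-- list.min.js and the initialization shown earlier load after this markup. -->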
One thing to note is the #dotParse("/library/new/new_arrival_list.vtl") line of code, which includes the separate HTML list items file. With this implementation, updating the page with new items going forward only requires adding new HTML list items to that file. Here is an example of the final HTML item list code for one new item (the cover image URL shown is a placeholder, and the hidden filter values are illustrative):
    <li>
      <img src="cover-image-url" alt="The Mongol world empire, 1206-1370" />
      <h3 class="title">The Mongol world empire, 1206-1370</h3>
      <span class="itemtype" style="display:none">Book</span>
      <span class="department" style="display:none">History</span>
    </li>
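To give a sense of how those hidden values drive the user-facing filters, buttons wired up along these lines would narrow the list to one item type (again a sketch; the button markup and ids are assumptions, while list.filter() is List.js's documented filtering call):

    <button id="filter-books">Books</button>
    <button id="filter-all">Show all</button>
    <script>
      // Show only items whose hidden itemtype value is "Book".
      document.getElementById('filter-books').onclick = function () {
        newTitles.filter(function (item) {
          return item.values().itemtype === 'Book';
        });
      };
      // Calling filter() with no callback clears the active filter.
      document.getElementById('filter-all').onclick = function () {
        newTitles.filter();
      };
    </script>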