Disable Spotlight indexing of network volumes


A user request today raised the question of how to disable the automatic Spotlight indexing of network volumes. While users can manually add local and network volumes to the Spotlight Privacy exception list, this is not an obvious process and usually requires a tech to walk them through it, often more than once. When left unconfigured, Spotlight and its mds family of tools will continuously index network volumes, putting extra strain on network infrastructure and further degrading the already less-than-stellar SMB performance in OS X.

The lack of a readily available solution put me on the path of figuring out whether a more system-wide “killswitch” might be lurking somewhere. Most Mac Admins dealing with network volumes are likely aware of the setting that disables the creation of .DS_Store metadata on network volumes, but as far as I could tell no similar setting to control network volume indexing was documented.


To have mds ignore all external volumes, including network volumes, run the following command:

$ sudo defaults write /Library/Preferences/com.apple.SpotlightServer.plist ExternalVolumesIgnore -bool True

To retain the ability to re-enable indexing for select external volumes later, run this command instead:

$ sudo defaults write /Library/Preferences/com.apple.SpotlightServer.plist ExternalVolumesDefaultOff -bool True

Diving in

Some poking around turned the attention to mds by way of /System/Library/LaunchDaemons/com.apple.metadata.mds.plist which invokes the mds binary at boot time. Next step was to open the executable in everyone’s favorite disassembler Hopper in order to see what we could see. I’m by no means a Hopper pro so I will typically perform some initial “Labels” or “Strings” searches for terms that are of interest. In this case I went with the generic “Preferences” query to get an idea of how and where mds might get and set its preferences. That initial search turned up some decent results as seen in Figure 1.

Figure 1 – “Preferences” search

Working in Hopper involves quite a bit of following rabbit holes down dead ends so I won’t bore the reader with the results of following each of these results into the decompiled code but eventually I came across a rather apropos block of pseudo-code courtesy of Hopper’s handy pseudo-code generator as seen in Figure 2.

Figure 2 – Pseudo-code

Highlighted are two preference keys, ExternalVolumesIgnore and ExternalVolumesDefaultOff, both read via CFPreferencesGetAppBooleanValue – exactly what we were hoping to find! Helpfully, the same block of code also contains logging that describes exactly what the two keys do, leaving us less guesswork. The logging string for the ExternalVolumesIgnore key says:

“ExternalVolumesIgnore” is set. All external volumes (except backup) will be ignored

Similarly, the logging string for ExternalVolumesDefaultOff states:

“ExternalVolumesDefaultOff” is set. All external volumes (except backup) will default off, override with mdutil -i on

Excellent. Those two keys would do very nicely in our quest to squelch network-intensive Spotlight indexing. The “Ignore” version of the preference is the all-or-nothing option, while the “Default Off” version implies that the user can still opt certain volumes back in should they wish to. For example, with the “Default Off” preference a direct-attached USB or Thunderbolt drive would not be indexed by default, but the user could enable indexing for it by running mdutil -i on /Volumes/MyExternalDisk from the command line.
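Based on the pseudo-code and log strings, the decision logic appears to boil down to something like the following Python sketch. This is a model of the decompiled behavior, not actual mds code, and all names are illustrative:

```python
def should_index(is_external, is_backup, ignore_set, default_off_set,
                 user_opted_in=False):
    """Model of how mds appears to treat volumes per the decompiled
    pseudo-code. Internal volumes are always indexed."""
    if not is_external:
        return True
    if is_backup:
        # Both log strings say backup volumes are exempt.
        return True
    if ignore_set:
        # ExternalVolumesIgnore: all external volumes skipped, no override.
        return False
    if default_off_set:
        # ExternalVolumesDefaultOff: off by default, but the user can opt a
        # volume back in with `mdutil -i on /Volumes/...`.
        return user_opted_in
    return True

# A USB drive under ExternalVolumesDefaultOff, before and after `mdutil -i on`:
print(should_index(True, False, False, True))        # False
print(should_index(True, False, False, True, True))  # True
```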

Progress made

With a good selection of prospective preference keys in hand, the next step was to figure out where mds and/or Spotlight actually look for them. As a Hopper apprentice I usually start the hunt for possible preference file locations with a string search for “.plist”, as seen in Figure 3.

Figure 3 – “.plist” search

There are some good leads here, but it would be better to figure out exactly which of these plist files (or another one entirely) we should write one of the two candidate keys to for testing. A little crowdsourcing with the Mac Admins Slack team, and especially help from @frogor aka Mike Lynn in the #python channel, led me to the correct file. An earlier search for “Preferences” had also turned up the kMDSPreferencesName variable, which is used in direct relation to the ExternalVolumes* keys, but I was unable to find a reference to the actual file it points to using Hopper. Calling upon Mike Lynn’s notorious (and some would say legendary) Python skills resulted in him whipping up this code, which uses some very clever PyObjC calls to determine the value of kMDSPreferencesName – in this case /Library/Preferences/com.apple.SpotlightServer.plist. The exercise inspired Mike to write a more generic post on how to use the methods in this bit of code to retrieve this kind of information, so keep an eye on his blog for that.

Testing our findings

Back to our quest: now that we know the preference file’s name and location we can test our candidate keys against it. By default /Library/Preferences/com.apple.SpotlightServer.plist does not exist, so we need to create it. The quickest way is defaults on the command line. The following simultaneously creates /Library/Preferences/com.apple.SpotlightServer.plist and writes the ExternalVolumesIgnore boolean “True” key:

$ sudo defaults write /Library/Preferences/com.apple.SpotlightServer.plist ExternalVolumesIgnore -bool True

Or if we want to let the user opt in some external volumes at a later time if desired:

$ sudo defaults write /Library/Preferences/com.apple.SpotlightServer.plist ExternalVolumesDefaultOff -bool True
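For scripted deployments, the same key can also be written with Python 3’s plistlib instead of defaults. A sketch, writing to a temporary path rather than the live preference file:

```python
import os
import plistlib
import tempfile

# Write the equivalent of the `defaults write` command above. A real
# deployment script would target (as root)
# /Library/Preferences/com.apple.SpotlightServer.plist instead.
prefs = {"ExternalVolumesIgnore": True}

path = os.path.join(tempfile.mkdtemp(), "com.apple.SpotlightServer.plist")
with open(path, "wb") as f:
    plistlib.dump(prefs, f)

# Read it back to verify the key landed.
with open(path, "rb") as f:
    assert plistlib.load(f)["ExternalVolumesIgnore"] is True
```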

Now when we mount a network volume we should see some logging in /var/log/system.log that indicates the volume is not being indexed:

9/15/15 10:14:08.486 PM mds[18428]: (Normal) DiskStore: "ExternalVolumesIgnore" is set.  All external volumes (except backup) will be ignored

Success! Now mds is ignoring this and all other external volumes entirely and not indexing them, exactly what we were looking for.

Conclusion & Download

To make testing this in other environments easier I am providing two profiles that set either the ExternalVolumesIgnore or ExternalVolumesDefaultOff key for deployment. Remember: test, test and then test again!

Sample configuration profile: ExternalVolumesDefaultOff
Sample configuration profile: ExternalVolumesIgnore
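For reference, the payload portion of the ExternalVolumesIgnore profile would look roughly like this. This is a sketch of an MCX-style custom settings payload, not the literal contents of the downloadable profile; identifiers and surrounding payload keys are omitted:

```xml
<key>PayloadContent</key>
<dict>
    <key>com.apple.SpotlightServer</key>
    <dict>
        <key>Forced</key>
        <array>
            <dict>
                <key>mcx_preference_settings</key>
                <dict>
                    <key>ExternalVolumesIgnore</key>
                    <true/>
                </dict>
            </dict>
        </array>
    </dict>
</dict>
```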


MDM-azing – setting up your own MDM server

Pardon the pun, but I’ve been meaning to use that Shamen reference ever since MDM became A Thing.

It is not for a lack of Mobile Device Management solutions that I wanted to figure out the process of setting up my own MDM server, of course. Quite the opposite: there are many vendors out there offering MDM solutions with varying levels of customer satisfaction. It’s fair to say that MDM solutions often tend toward the far ends of a spectrum capped by “overly complicated” on one end and “checkbox feature” on the other. Barring a few free ones, most are also pricey, further lowering one’s sense of getting bang for one’s buck. Another issue can be integration with existing systems, which often leads companies to buy into a complete solution from a single vendor. Not a perfect situation by any stretch of the imagination for those of us who have perfectly functional management tools and are just looking to enhance their toolset with the benefits of Apple’s OS X MDM integration.

There have been previous flurries of interest and activity in the Mac Admin community around creating a true OSS MDM solution. These attempts mostly fizzled due to uncertainty about the exact process of creating an MDM service and a lack of sources of information. After some asking around it was determined that Apple keeps certain key bits of information behind the iOS Enterprise Developer paywall, such as the Mobile Device Management Protocol Reference document. Even more importantly, the ability to sign the required MDM CSR for such a service is also only available to organizations subscribing to the same $300/year program.

One particularly interesting project has been part of the MITRE Corporation’s Project iMAS, or iOS Mobile Application Security. Based on a 2011 presentation at the Black Hat conference by the Intrepidus Group named “Inside Apple MDM” and the sample code it contained, the Project iMAS folks have quietly worked out a very usable set of Python code that offers a reference MDM server ready to be built upon. This leaves the small matter of actually being able to use the code with Apple’s blessing. The folks at Project iMAS helpfully provide detailed Setup steps for preparing the needed certificates, but like other sources their information on how to actually get Apple’s cooperation is sparse and not up to date with how Apple’s iOS Developer portal works in 2015. With that said, these are the steps I followed to successfully stand up a basic MDM server that works with APNS and Apple’s MDM support:

  1. Enroll in Apple’s iOS Enterprise Developer program or have yourself added to an existing subscription
  2. Contact Apple Developer support and ask for the “MDM CSR” option to be enabled in the Certificates section
  3. The Apple representative may tell you that in order to enable the option they have to get the approval of an iOS Enterprise account Agent or Admin. It’s a good idea to find out who this is (if not you) and give them a heads up about what you’re asking Apple for and that you’d like them to approve the request.
  4. Once the account Agent or Admin has approved the request a new option will be available in the “Certificates, Identifiers & Profiles” section of the Account page. As far as I have been able to determine this option is only available to an Admin user so if you are not an Admin for the account you’ll need the cooperation of someone who is in order to perform the CSR-signing step.
  5. Using Keychain Access (or openssl) generate a basic Certificate Signing Request (CSR) as outlined in the Project iMAS Setup instructions. Use the email address of a developer account that is part of the same iOS Enterprise account that you had the MDM CSR option enabled on. Use a Common Name of your choice (your organization’s name for example) and select to save it to disk. Name the CSR file “mdmvendor.csr”.
  6. Unless you are an Admin on the iOS Enterprise account you need to send the “mdmvendor.csr” file to an Admin user and have them select the “MDM CSR” option in Certificates, Identifiers & Profiles (Fig. 1)
  7. Submitting the CSR is very similar to how the Apple Push Certificates Portal works (Fig. 2). The Admin selects the “mdmvendor.csr” file for upload and goes through the submission steps. At the end of the process a signed certificate file is ready for download. The Admin user should save this file to disk and name it “mdmvendor.cer”. The Admin needs to send the “mdmvendor.cer” file back to you either by email or through some other method. You will not need their assistance after this.
  8. Before moving on, clone the mdm-server project to the system you are going to do the rest of the certificate preparation on as it contains a number of tools that simplify the process:
    $ git clone https://github.com/project-imas/mdm-server.git
    $ git submodule init
    $ git submodule update
  9. You can now continue to follow the steps starting at “3. Export MDM private key” in the Project iMAS README. One clarification is needed for this step: make sure to import the “mdmvendor.cer” file into the Login keychain in order to enable the saving of the private key as described.
  10. Once all the certificate preparations are completed the Server Setup section is next. You are free to go through these manual setup steps but in order to get up and running a bit faster I have created a Docker image that wraps it all up into an image that is ready to go. To start a container that has access to the prerequisite certificates and keys that we created we can bind-mount our local “mdm-server/server” directory in which they are located, using the -v source:destination flag:
    $ docker pull bruienne/mdm-server
    Pulling repository bruienne/mdm-server
    b7de3133ff98: Pulling dependent layers
    5cc9e91966f7: Pulling fs layer
    511136ea3c5a: Download complete
    $ docker run -d -v /Users/username/path/to/mdm-server/server:/mdm-server/server -p 8080:8080 bruienne/mdm-server
    Starting Server
    Can't find MyApp.mobileprovision in current directory.
    Need both MyApp.ipa and Manifest.plist to enable InstallCustomApp.
  11. You should now be able to reach the server on the IP of the Docker host: https://<IP_OR_HOSTNAME>:8080/. On an iOS device you’ll see links to download the CA certificate and the enrollment profile (Fig. 3).

Figure 1 – MDM CSR option

Figure 2 – Upload CSR

Figure 3 – Open Source MDM Homepage

This should get the basic server going. Hopefully it helps spread knowledge and insight into the Apple MDM process, resulting in simple MDM server code that can be integrated into larger management frameworks. An example of possible integration is OpenMDM, another OSS MDM project, which focuses on creating management profiles through a web interface. It’s not hard to imagine combining the two into a full solution that offers the same features Apple’s Profile Manager does – without the OS X Server requirement (and hopefully also without some of its bugs). In a follow-up post I’ll cover a few simple modifications to the server code that allow the enrollment of OS X Yosemite and Mavericks devices as well. Since OS X management is not part of the focus of the iMAS project I have been working on adding some of that, but more community input and hacking will definitely be welcome.


BSDPy Redis caching

As we have been ramping up BSDPy coverage in our environment it became clear that the server was spending so much of its time making API calls for clients checking in that it wasn’t getting enough time to respond to clients with boot acknowledgments. As background, the following are the steps the BSDP client and server go through to successfully NetBoot.

  • First, the client sends a broadcast DHCP INFORM LIST request with vendor-specific options that contain its make and model. e.g. "AAPL/BSDPC MacBookPro8,2".
  • Second, the server replies with a DHCP ACK LIST reply that once again contains vendor-specific options that contain a list of one or more images that the client is entitled to. This list also includes information designating the default image, their IDs and a name that is displayed in the client’s boot selector.
  • Next, the client decides which image to boot: either the default image, implied when the user holds down the N key at startup, or an image the user picks in the boot picker or via Startup Disk in OS X’s System Preferences. To indicate its choice the client sends a DHCP INFORM SELECT packet with, you guessed it, vendor-specific options containing the image ID.
  • Lastly the server replies back with a DHCP ACK SELECT packet that contains the server name to boot from, a TFTP URI to the booter (kernel) and (in another set of vendor-specific options) an HTTP or NFS URI to the NetBoot.dmg/NetInstall.dmg OS image.

In our case traffic to the BSDPy hosts was increasing exponentially and they were now serving just about our entire campus. Thousands upon thousands of clients were sending non-BSDP DHCP requests that put load on the host in general (conntrack, anyone?) while also flooding BSDPy with BSDP requests. Since every BSDP INFORM LIST request was checked against an external API, this caused latency and excessive timeouts as replies were processed and sent back out over and over, especially given that the Apple BSDP client tends to hammer the network with BSDP requests. Actual boot requests (DHCP INFORM SELECT) were going unanswered, leaving clients less able to boot.

One might argue that this is a case where multi-threading could help, but I wasn’t comfortable with the can of worms that subject opens. Instead I decided to see if the issue could be resolved by keeping a short-term cache of frequently and/or recently checked-in clients to cut down on expensive and possibly slow API calls. Having already considered some form of caching in the past, this seemed like as good a time as any to implement it and see whether it would bring performance to more acceptable levels. Spoiler alert: it did.
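This is the classic cache-aside pattern: answer LIST requests from a short-lived local cache when possible, and hit the API only on a miss. A minimal dict-based Python sketch of the idea (the real implementation uses Redis; all names here are illustrative):

```python
import time

CACHE_TTL = 300  # seconds, i.e. a 5-minute expiration

_cache = {}  # key -> (expires_at, entitlements)

def lookup_entitlements(key, fetch_from_api):
    """Return cached entitlements for `key`, calling the (slow) API only
    on a miss or after expiry. A hit also refreshes the TTL, like
    re-running EXPIRE in Redis."""
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        _cache[key] = (now + CACHE_TTL, entry[1])  # refresh TTL on hit
        return entry[1]
    result = fetch_from_api(key)  # expensive call, only on miss
    _cache[key] = (now + CACHE_TTL, result)
    return result

calls = []
def fake_api(key):
    calls.append(key)
    return [{"image_id": 5000, "image_name": "Yosemite 14D136"}]

lookup_entitlements("00:11:22:33:44:55", fake_api)
lookup_entitlements("00:11:22:33:44:55", fake_api)
print(len(calls))  # the API was only hit once
```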

All of the preceding serves as a long way of saying that as of today BSDPy’s pre-release branch includes the ability to automatically cache client requests in Redis. Redis is a popular RAM-based NoSQL key-value store that is the perfect choice for storing this kind of information: simple key-value pairs with easy to configure expirations. For each client request there are a few bits of data we’re interested in without the likelihood of the info being dramatically different each time: most of us don’t keep more than a few NBI sets to support a legacy OS X version, a current OS X version and sometimes a forked OS X build.

So, in key-value terms the client sends us:

  • Client MAC address
  • Client Model ID
  • Client IP address

The API returns one or more groups of the following keys and their corresponding values:

  • Image ID
  • Image name
  • Image priority
  • Booter path (TFTP)
  • Image path (NFS or HTTP)

Using Redis’ hash commands (AKA dictionary) this information can then easily be combined and stored by concatenating the client’s MAC address, Model ID and IP address into one “unique-enough” key with a number of hash entries:

# Create a key with our image entitlements

HSET 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 image_id 5000
HSET 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 image_name 'Yosemite 14D136'
HSET 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 image_priority 10
HSET 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 booter_path '/nbi/Yoyo.nbi/i386/booter'
HSET 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 image_path 'http://myhost.org/nbi/Yoyo.nbi/NI.nbi'

# Show all fields for this key

HKEYS 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67
1) "image_id"
2) "image_name"
3) "image_priority"
4) "booter_path"
5) "image_path"

The key may look a little goofy but since we’re really only interested in holding on to what the API says is the correct Netboot information for this particular permutation of MAC address, Model ID and IP address it works fine. The reason we’re not just using the MAC address as our unique key (they are supposed to be unique, right?) is to account for the increasing number of Ethernet adapters techs have to use in order to NetBoot modern Macs. This means that the MAC address of one Ethernet adapter may very well be associated with dozens of different Mac models, some of which may have different NBI entitlements.
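Assembling that “unique-enough” key is trivial in code. A sketch (the underscore-joined format mirrors the examples above, but the exact separator is an implementation detail):

```python
def make_cache_key(mac, model_id, ip):
    """Build a per-client cache key from MAC address, Model ID and IP.
    Including the model guards against shared Ethernet adapters, whose MAC
    may be seen attached to many different Mac models."""
    return "_".join((mac, model_id, ip))

key = make_cache_key("00:11:22:33:44:55", "MacBookPro8,2", "12.34.45.67")
print(key)  # 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67
```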

Since there’s no need to hold on to the data forever we can set a fairly short expiration on the key, 5 minutes seems to be a fine limit:

EXPIRE 00:11:22:33:44:55_MacBookPro8,2_12.34.45.67 300

We can always reset the timeout if a client checks in again before expiration, to account for a machine that may be experiencing network issues of its own and therefore might take a few attempts to boot successfully. After a client has successfully NetBooted it is unlikely to check back in for a while, so after 5 minutes its Redis key will be removed.

Enough with the gory technical details, you say? Awesome. Enabling BSDPy’s Redis caching is easiest if you are already running it in a Docker container, since Docker’s container linking can hook up Redis and have BSDPy start caching automatically. The steps are as follows:

  1. Pull the official Redis Docker image from the Docker Hub:
    docker pull redis
  2. Run the Redis image with default settings (no need to expose ports):
    docker run -d --name redis redis
  3. Run the BSDPy Docker image, linking it to the running Redis container:
    docker run -d --name bsdpy <MORE OPTIONS> --link redis:db bruienne/bsdpy:1.0
  4. There is no step four.

The only important configuration item here is the --link redis:db flag, which tells Docker the name of the container to link to (redis) and what to name the link inside the BSDPy container (db). BSDPy looks for environment variables automatically created by Docker that all start with "DB_" to determine whether Redis was linked and thus should be used for caching. Linking Redis under any other name will cause BSDPy to ignore it and no caching will occur. When Redis caching is successfully activated the startup notices in the log will look something like this:

06/03/2015 05:49:30 PM - DEBUG: ------- Start Docker env vars -------
06/03/2015 05:49:30 PM - DEBUG: BSDPY_NBI_PATH: /nbi
06/03/2015 05:49:30 PM - DEBUG: BSDPY_IFACE: eth0
06/03/2015 05:49:30 PM - DEBUG: BSDPY_API_URL: https://myapphost.org/api/v1/netboot_images
06/03/2015 05:49:30 PM - DEBUG: BSDPY_PROTO: http
06/03/2015 05:49:30 PM - DEBUG: BSDPY_IP:
06/03/2015 05:49:30 PM - DEBUG: -------  End Docker env vars  -------
06/03/2015 05:49:30 PM - DEBUG: Using Redis caching for clients with Redis host on port 6379
06/03/2015 05:49:30 PM - DEBUG: tftprootpath is /nbi
06/03/2015 05:49:30 PM - INFO: Server priority: [251, 71]
06/03/2015 05:49:30 PM - DEBUG: Found $BSDPY_IP - using custom external IP
06/03/2015 05:49:30 PM - INFO: Server IP:
06/03/2015 05:49:30 PM - INFO: Server FQDN:
06/03/2015 05:49:30 PM - INFO: Serving on eth0
06/03/2015 05:49:30 PM - INFO: Using http to serve boot image

For those not using Docker, the environment variables to point at a valid Redis host and port are DB_PORT_6379_TCP_ADDR and DB_PORT_6379_TCP_PORT – either an IP or FQDN will work for the host variable.
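The detection logic amounts to checking the environment for those Docker-style link variables. A sketch of the idea (the function name is illustrative, not BSDPy’s actual code):

```python
import os

def redis_settings(environ=os.environ):
    """Return (host, port) for Redis if Docker-style link variables are
    present, or None to disable caching. Docker creates the DB_* variables
    only when the container is linked with the alias `db` (--link redis:db)."""
    host = environ.get("DB_PORT_6379_TCP_ADDR")
    if not host:
        return None
    port = int(environ.get("DB_PORT_6379_TCP_PORT", "6379"))
    return (host, port)

# Simulating a linked container vs. no link at all:
print(redis_settings({"DB_PORT_6379_TCP_ADDR": "172.17.0.2",
                      "DB_PORT_6379_TCP_PORT": "6379"}))  # ('172.17.0.2', 6379)
print(redis_settings({}))  # None
```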

And that is all. Try it out, provide feedback, send pull requests!


Conference Season 2015

With Spring on its way into Summer, conference season approaches as well. The 2015 season promises to be busier than ever, so busy in fact that Mac Admins are having to decide which conferences to skip this year because there are so many to choose from. The exception to the rule is probably WWDC, for which one needs to have won the ticket lottery to attend. If you entered and won a ticket, I sure hope you intend to go. For those not so lucky there are plenty of other events at which to meet up with fellow Mac Admins and discuss all the wonderful surprises Apple has in store for future hardware and software releases.

Some of the highlights:

ACEs Conference, New Orleans, LA – May 20-21
WWDC 2015, San Francisco, CA – June 8-12
MacDeployment 2015, Calgary, AB – June 18 
Mac Devops YVR, Vancouver, BC – June 19
Penn State Mac Admins Conference, State College, PA – July 7-10
MacIT Conference, Santa Clara, CA – July 14-16
Mac SysAdmin 2015, Gothenburg, Sweden – September 29 – October 2
JAMF Nation User Conference, Minneapolis, MN – October 13-15
MacTech Conference, Los Angeles, CA – November 4-6

I’m fortunate enough to be speaking at two of these events, namely the Penn State Mac Admins Conference and Mac Sysadmin 2015. I have two talks lined up for PSU and one for Mac Sysadmin:

Free your NetBoot server with BSDPy – Penn State Mac Admins
Connect the dots with Docker – Penn State Mac Admins (Joint session with Nick McSpadden)
Practical Docker for Mac Sysadmins – Mac Sysadmin 2015

I hope to see some of you this summer at one of these events. Even if you can’t make it to State College or Gothenburg I hope you’ll consider some of the other events as they’re all great ways to meet some of your fellow Mac Admins and Learn New Stuff!


Adding Python or Ruby to custom NetInstall images with AutoNBI

A recent update to AutoNBI, a tool I wrote to automate the creation of custom Apple NetInstall images (NBIs), expands its customization abilities. Until now an admin could essentially forklift a custom folder into the NBI, as explained quite nicely by Graham Gilbert in recent blog posts here and here. The immediate use for this is to replace the “Packages” folder on a standard NetInstall volume with one that has been prepped with a custom rc.imaging file and additional custom tools meant to be run at boot time, such as a lightweight disk imaging or no-imaging tool. This works for applications that are fully self-contained, like a compiled Cocoa app, but is less useful if the application depends on frameworks like Python or Ruby, which are not part of the default NetInstall environment. The updated version of AutoNBI offers the option to include the Python and/or Ruby frameworks in the NetInstall BaseSystem.dmg, allowing custom scripts written in either language to run. The first tool to leverage this ability is Graham Gilbert’s very promising Imagr, which is written in PyObjC and thus relies on the availability of the Python framework in /System/Library/Frameworks.

I’m looking at including other potentially useful add-ons such as VNC or SSH while sticking to the overall goal of keeping the boot environment lightweight in order to provide short boot times and minimal network load.

A special word of thanks goes to every Mac Admin’s favorite Python whisperer Michael Lynn for figuring out how to parse Apple’s custom wrapper around OS X Yosemite installer sources without which it would have been nearly impossible to add these new features.

The current main branch contains the changes, so go check it out! Updated instructions can be found in the Readme on the Bitbucket repository.


Box cutting, or how I stumbled onto a serious security flaw in Box Sync for Mac

TL;DR – Update to Box Sync for Mac 4.0.6035 immediately. The app exposes several sensitive bits of data like API keys, internal user IDs, URLs and passwords. Read on for details.

The trouble with Box Sync

Recently I revisited the convoluted mess that is the Box Sync application for Mac. If you are a Mac Admin in charge of even a small deployment environment you probably know how tedious it is to deploy the Box Sync application and manage its settings. Its only deployment method is an application bundle, which would be fine if it behaved like a normal drag-and-drop application: your mass-deployment tool simply copies the application to /Applications, a profile or MCX configures settings published by the vendor, and all is well. Not so with Box Sync. Box offers instructions on how to deploy Box Sync for large-scale clients, which require that the Mac Admin do the following:

  • Copy the Box Sync application to /Applications
  • Copy /Applications/Box Sync.app/Contents/Resources/com.box.sync.bootstrapper to /Library/PrivilegedHelperTools
  • Run sudo "/Applications/Box Sync.app/Contents/Resources/com.box.sync.bootstrapper", which performs first-run setup

It will occur to many Mac Admins that these steps would be better handled by a standard Apple PKG, and they would be correct. Performing manual copy operations followed by running commands by hand to finalize the installation doesn’t scale very well. Apple installer packages are rather good at placing files in specific filesystem locations with specific ownerships and permissions (steps one and two of the Box Sync install process) and then running post-install commands to further configure the application for the end user (step three). Further complications arise once the Box Sync application is installed because of its default behavior of automatically updating itself in the background. This is fine for home users, but it is problematic in a large deployment where an admin maintains a workflow in which all software updates flow through “testing”, “QA” and “production” stages. Very little information is available from Box about changing the application’s default preferences, nor does the application’s Settings tab offer much:

In our environment we follow the aforementioned testing/QA/production workflow, so having the Box Sync client update itself without allowing us to verify its compatibility within our environment was a problem. Lacking any documentation from Box, and on a hunch, I took it upon myself to check whether any configuration information was buried inside the Box Sync bundle.

Time to go spelunking

At first glance the app bundle contents seem pretty unassuming:
Nothing too interesting in Frameworks either, just Python and Growl. The bundled Python framework makes sense (though, what’s wrong with System Python?) since we know that the Box service, like Dropbox’s, is partly written in Python:
The Resources folder has a lot of PNG images used as icons for document types and the UI, as well as a number of .nib files so we’ll ignore those. We see some more signs of Box being written in Python:
Let’s check out the include and lib folders, familiar-looking names for anyone who has had to install a Python application that has Python modules as dependencies:
It appears that the include folder contains just the pyconfig.h header file, so we’ll look at the lib folder which seems to have more contents:
The most interesting item here is the site-packages.zip file which is another familiar name to those who have deployed Python applications before. OS X itself has a folder named site-packages located inside /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7. Inside are third party Python modules that can be installed through easy_install or pip and are meant to be available to all Python applications. Let’s see what’s inside, shall we? The unzipped site-packages folder is over 20 MB in size, containing 51 subfolders and nearly 140 .pyo files. The .pyo files are Python optimized bytecode files, essentially just like the .pyc files Python automatically creates when running a Python .py program file. More about these in a bit as we dive into the contents of the folders next.

Further down the rabbit hole we go

Most of the folders contain commonly used Python modules one would expect to see in an app of Box Sync’s magnitude: a number of PyObjc modules to call Apple frameworks, XML, SSL, JSON, SQLite, NTLM modules and various other modules that don’t seem too interesting. What does look promising are the box and boxsdk folders:
Looking at the contents of the box folder, our eye is drawn to the conf subfolder since we’re still hunting for clues about configuring the application a little better. At this point let’s return to the optimized bytecode format of those .pyo files. A quick Google search for “Python pyo files” tells us they are trivial to revert to regular Python code using the uncompyle2 tool, available through either easy_install or pip. Once installed, we run uncompyle2 against the contents of the entire site-packages folder using the -r switch to process it recursively. If there are other places where interesting configuration options may be lurking, we’re bound to find them.

$ easy_install uncompyle2
$ uncompyle2 -o /tmp/site-packages-decomp -r -p 8 "/Applications/Box Sync.app/Contents/Resources/lib/site-packages"
# 2015.02.07 01:36:54 EST
decompiled 1 files: 1 okay, 0 failed, 0 verify failed
decompiled 1 files: 1 okay, 0 failed, 0 verify failed
decompiled 1 files: 1 okay, 0 failed, 0 verify failed
decompiled 1 files: 1 okay, 0 failed, 0 verify failed
decompiled 1 files: 1 okay, 0 failed, 0 verify failed
decompiled 1 files: 1 okay, 0 failed, 0 verify failed

After the processing is done we check out the contents of /tmp/site-packages-decomp and see that uncompyle2 produced files with a .pyo_dis extension, which are just plain .py files at this point. We could rename them all, but any text editor will read them now so we won’t bother.
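If you did want to normalize the extensions anyway, say for tooling that insists on .py files, a few lines of Python would do it. This is just a sketch run against a throwaway stand-in directory rather than the real /tmp/site-packages-decomp:

```python
import os, tempfile

# Stand-in for /tmp/site-packages-decomp; uncompyle2 writes *.pyo_dis files.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "box", "conf"))
for rel in ("box/conf/base.pyo_dis", "configuration.pyo_dis"):
    open(os.path.join(root, rel), "w").close()

# Walk the tree and swap the .pyo_dis extension for .py
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        if name.endswith(".pyo_dis"):
            src = os.path.join(dirpath, name)
            os.rename(src, src[: -len(".pyo_dis")] + ".py")

print(sorted(f for _, _, fs in os.walk(root) for f in fs))
# -> ['base.py', 'configuration.py']
```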

Deep dive for plists

Opening up the /tmp/site-packages-decomp folder in an app like TextMate or BBEdit is going to be the easiest way to search the entire codebase for anything related to preferences, like a plist file. To start off we’ll use TextMate 2 to search for any files containing the text “plist”:
That wasn’t too hard, was it? It looks like the file at site-packages/box/conf/base.pyo_dis has references to a file named /Library/Preferences/com.box.sync.plist:

conf.set(u'preferences.mac_plist_file.path', u'/Library/Preferences/com.box.sync.plist')

Eureka! This file is normally not to be found on systems with Box Sync installed and configured for the user, so this is a great start. Some more searching reveals the _overridable_settings list in configuration.py which contains a key named auto_update.enabled! Exactly the kind of setting we were looking for. Its inclusion in a list of settings named “overridable settings” further increases our confidence. To test that this setting actually works we start by writing a new /Library/Preferences/com.box.sync.plist file using defaults like so:

$ sudo defaults write /Library/Preferences/com.box.sync.plist auto_update.enabled -bool False

And indeed: when tested by installing an older version of Box Sync with the /Library/Preferences/com.box.sync.plist preference file in place and the auto_update.enabled key set to False, the application no longer attempts to auto-update. Mission accomplished, go home? Well… almost.
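For deployment scripts, the same preference file can also be produced with Python’s plistlib instead of defaults. A sketch, using a temporary path as a stand-in since the real file lives at /Library/Preferences/com.box.sync.plist and requires root to write:

```python
import os, plistlib, tempfile

# Hypothetical stand-in path; swap in /Library/Preferences/com.box.sync.plist
# (running as root) for a real deployment.
path = os.path.join(tempfile.mkdtemp(), "com.box.sync.plist")

# Write the auto_update.enabled = False preference
with open(path, "wb") as fp:
    plistlib.dump({"auto_update.enabled": False}, fp)

# Read it back to confirm what Box Sync would see
with open(path, "rb") as fp:
    prefs = plistlib.load(fp)
print(prefs["auto_update.enabled"])  # -> False
```

Note that plistlib writes XML plists while defaults writes binary ones; both are read equally well by the preferences system.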

Things get kinda real at this point.

Once I started to scroll around a bit more in the decompiled base.py and auto_update_release.py files and saw what else was in them, my reaction can be summarized as follows:


As it turns out, the development team at Box embedded a lot of rather sensitive information in the files belonging to the conf module. A quick scan reveals sensitive-looking key/value pairs such as:

api_key #(Mac/Windows specific)
client_secret #(Mac/Windows specific)
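Finding leaks like this mechanically is straightforward, which is exactly what makes them dangerous. A minimal sketch of a scanner, with an illustrative pattern list (real scanners use far larger rule sets), demonstrated against a throwaway file standing in for the decompiled sources:

```python
import os, re, tempfile

# Illustrative patterns only; extend for real auditing
SUSPECT = re.compile(r"(api_key|client_secret)['\"]?\s*[,:=]")

def scan_tree(root):
    """Return (path, line_number, line) for every suspicious-looking line."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as fp:
                for lineno, line in enumerate(fp, 1):
                    if SUSPECT.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits

# Demo against a fabricated file; the value shown is a placeholder
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "base.py"), "w") as fp:
    fp.write("conf.set(u'api_key', u'REDACTED')\n")
print(scan_tree(demo))
```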

This is probably not a good thing. Since bots exist that scan GitHub and other public version control services for unintentionally checked-in API keys and secrets, Box probably didn’t mean to expose all this information in the Box Sync application. To be clear, I did not try to use any of the information I found to gain access to any Box systems. I am also not publishing the full source code or complete key/value pairs; that is left up to the inquisitive reader to pursue.

In early January, after realizing what I had found, I reported the issue to the Box Security team, and after receiving acknowledgment of the severity of the issue I was asked to delay disclosure to give the Box development team time to develop and ship a fix. On February 6th I was notified that an updated version, 4.0.6035, had been released which is supposed to resolve the issue. Since the update is now available I am publishing my findings in order to give a heads-up to fellow Mac Admins and anyone else who uses or deploys Box Sync to ensure that the 4.0.6035 update is applied ASAP. There is no way of knowing who else was aware of the exposed information before me and whether or not it may have been used to access Box customer data. This is especially important in environments that use a managed software update workflow which may be holding back automatic updates until specific action is taken by an admin.


I hope this information will be useful to Mac Admins and individual Mac users alike, and I again urge every Box Sync user to make sure that their installed version is 4.0.6035 or above.

Read More

Enable Google two-factor authentication for SSH connections on OS X

Note: this post was updated with additional security concerns regarding Git and the method of installing the required tools needed for compiling the PAM module. Thanks to @marczak and @Magervalp for the feedback.

Two-factor authentication (2FA) is fairly mainstream these days, which is a good thing. It would be nifty if Mac Admins could add the increased security 2FA offers to remote (SSH) logins on OS X. There are existing commercial solutions like Duo Security (a local Ann Arbor business I heartily endorse) that offer tools to accomplish this, but if you are already using Google Authenticator for other services it may make sense to use that instead. As part of the Google Authenticator open source code, Google provides a PAM module which, with some effort, can be compiled and configured for use with OS X’s own PAM system. In order to compile the GA PAM module the Xcode CLI tools are required, as well as automake and autoconf. The easiest way to install the latter two is either through Homebrew, a popular OS X package manager, or using ready-made PKG installers from the Rudix project.


In order to prepare the required tools follow these steps. First, we’ll need the Xcode command line tools:

$ xcode-select --install
xcode-select: note: install requested for command line developer tools

This will prompt the user to install the command line developer tools.

Before we continue a quick note regarding the Git client that ships with OS X – this is a post about security after all. A few weeks ago it was announced that all shipping Git clients had a serious security issue on case-insensitive filesystems that could allow for malicious repositories to overwrite the .git/config file and cause arbitrary command execution. Apple shipped a patch for the issue with Xcode 6.2 beta 3 which I would strongly suggest downloading from Apple’s Developer site and installing.

All that is left now is to install automake and autoconf, which are the only required tools that do not ship with Xcode. As noted by one commenter, it was also necessary for him to install libtool. I’ve added it to the list for reference; it may not be needed by everyone, but it won’t hurt to install it alongside the other two. If you are a current Homebrew user, all you should have to do is:

$ brew install autoconf automake libtool

Or, if you use Rudix as your package manager it should be as simple as:

$ sudo rudix install automake autoconf libtool

If you would like to use either Homebrew or Rudix package managers but don’t have them installed yet you must do so first. As noted by Ed Marczak the recommended installation method for both Homebrew and Rudix involves directly piping and executing code from the Internet, in good faith. I agree with him that this is not necessarily a habit you want to get too comfortable with. It takes as little as one line of code inserted either accidentally or maliciously to cause data loss, install malware and so on. I’m not implying that either of these tools will, but other less scrupulous persons may take advantage of the trust you previously put into legitimate install processes. I recommend that you examine the code executed by any pipe-curl-to-interpreter install like Homebrew or Rudix beforehand. Both Homebrew and Rudix have Github repositories.

If you just want to install the required tools without the added weight of a packaging tool you can opt to install the self-contained PKG installers for automake, autoconf and libtool provided by the Rudix project. If you decide to use the Rudix PKG installers I recommend that you examine them using something like Pacifist prior to installation. Pacifist is by far the best OS X package inspection tool and you should consider paying for a license. On with the show, shall we?

Installing Homebrew, followed by an installation of automake, autoconf and libtool:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install autoconf automake libtool

Installing Rudix package manager, followed by an installation of automake, autoconf and libtool:

$ curl -s https://raw.githubusercontent.com/rudix-mac/rpm/2014.10/rudix.py | sudo python - install rudix
$ sudo rudix install automake autoconf libtool

Installing automake and autoconf using Rudix PKG installers:

Download the automake PKG installer (10.6-10.10)
Download the autoconf PKG installer (10.6-10.10)
Download the libtool PKG installer (10.6-10.10)

Installation and configuration

With the prerequisites out of the way, compiling the PAM module should now go smoothly:

$ git clone https://github.com/google/google-authenticator.git
$ cd google-authenticator/libpam
$ autoreconf -ivf
$ automake --add-missing
$ ./configure
$ sudo make install
$ sudo cp /usr/local/lib/security/pam_google_authenticator.so /usr/lib/pam/
$ sudo vi /etc/pam.d/sshd

The last command above opens up the SSH daemon PAM configuration file in vim, where we will add the following line:

auth required pam_google_authenticator.so nullok

Adding this line makes the Google Authenticator PAM module required for all authentication requests. This means that in order to perform a successful SSH login the remote user must provide both their account password and a one-time code generated by Google Authenticator or another compatible 2FA app. Note the ‘nullok’ option, which causes the Google Authenticator module to be skipped for users who have not yet been set up using the google-authenticator tool, which we will discuss next.

Setting up users for two-factor authentication

As part of the ‘make install’ process an executable was installed to /usr/local/bin/google-authenticator which is used to set up a user for GA authentication. Running google-authenticator without any options will prompt the user to select the type of token to create (HOTP or TOTP) and a few other additional security options. Running it with the -h flag will display the full usage:

$ google-authenticator -h
google-authenticator [<options>]
 -h, --help               Print this message
 -c, --counter-based      Set up counter-based (HOTP) verification
 -t, --time-based         Set up time-based (TOTP) verification
 -d, --disallow-reuse     Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse        Allow reuse of previously used TOTP tokens
 -f, --force              Write file without first confirming with user
 -l, --label=<label>      Override the default label in "otpauth://" URL
 -i, --issuer=<issuer>    Override the default issuer in "otpauth://" URL
 -q, --quiet              Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N       Limit logins to N per every M seconds
 -R, --rate-time=M        Limit logins to N per every M seconds
 -u, --no-rate-limit      Disable rate-limiting
 -s, --secret=<file>      Specify a non-standard file location
 -w, --window-size=W      Set window of concurrently valid codes
 -W, --minimal-window     Disable window of concurrently valid codes

We will use option flags to perform a non-interactive configuration, the output of which is shown below. The options we’re using are -t (create a TOTP token, the more secure option), -d (disallow reuse), -r 1 (number of logins per time window), -R 30 (duration of the time window in seconds), -w 90 (token validity window) and -f (force writing the configuration to ~/.google_authenticator).

$ google-authenticator -t -d -r 1 -R 30 -w 90 -f

Your new secret key is: SECRET_KEY
Your verification code is VERIFICATION_CODE
Your emergency scratch codes are:

The output contains a few important bits of data. The first bit is the google.com URL which is a link to a QR code used to add the token for your user and host to Google Authenticator or other compatible app. Open the URL by command-clicking it in Terminal.app which should open your default web browser and show a QR code, ready to be added to a 2FA app. Instructions on how to add a token to Google Authenticator or Authy using QR codes are here:

Adding a new token to Google Authenticator
Adding a new token to Authy

The second bit (or bits) of info are the five emergency scratch codes which can be used as one-time emergency codes in case you lose access to your 2FA application. It is a good idea to store these emergency codes someplace safe.
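For the curious: the six-digit codes themselves are plain RFC 6238 TOTP values, an HMAC-SHA-1 over a 30-second time counter. A sketch using only the Python standard library, checked against the RFC’s published reference secret (the ASCII string "12345678901234567890"):

```python
import base64, hashlib, hmac, struct

def totp(secret_b32, for_time, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(for_time) // step           # number of 30s steps elapsed
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's reference secret; at Unix time 59 the expected
# 8-digit SHA-1 value is 94287082.
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, 59, digits=8))  # -> 94287082
```

The secret key printed by google-authenticator is the base32 string fed to this same algorithm by your 2FA app.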

With the setup of the Google Authenticator PAM module and configuring of our 2FA app out of the way we can now attempt a Google Authenticator 2FA-enabled SSH login:

$ ssh demo@localhost

Verification code: 
Last login: Tue Jan  6 23:05:58 2015 from localhost
myhost:~ demo$

Success! As seen above the SSH login process first prompts for the regular user password and then prompts for a verification code. The six-digit code is retrieved from the 2FA app we added our token to and once entered at the prompt it is accepted and login is complete. Huzzah!

Even though this post describes how to enable Google Authenticator 2FA for SSH on OS X it should work much the same for non-OS X hosts. The README found on the Github repository contains further detailed information on configuration as well.

Read More

MacTech Conference 2014 Docker slides are up

I spoke at MacTech Conference 2014 about Docker earlier this week, the slides for which are now up at https://db.tt/mSWzOHnb

In the talk I cover Docker and application containerization specific to Mac admins. The content is purposely not an all-encompassing dive into Docker but aims to introduce Mac admins to the concept of containerization and how it makes their lives easier deploying Mac management-centric services.

Thanks to everyone who showed up and asked questions during my talk. The MacTech Conference organization usually also makes the session videos available, for a fee. I am not involved in the sale of the videos so check out the Conference video page after the Conference to find out more.

Read More

Creating a signed Java Deployment Rule Set with Windows Server CA


With the release of Oracle’s Java 7 Update 51 came heightened security measures that affect unsigned and self-signed Java applets. At its standard “High” security setting the Java web plugin and standalone JVM will refuse to run unsigned or self-signed applets unless they have been explicitly added to a user-level whitelist which is a newly added security feature in Java 7 Update 51.
To allow large organizations to better manage security for their users Oracle previously introduced the Deployment Rule Set feature in Java 7 Update 40. The Deployment Rule Set consists of a single signed JAR file named “DeploymentRuleSet.jar” deployed in the Java system path “/Library/Application Support/Oracle/Java/Deployment”. Given the new security measures in Java 7 Update 51 it is a good time to start using a Deployment Rule Set since it provides:

  • The ability to use wildcard exception rules, unlike the user exception site list (https://mydomain.myorg.com/*)
  • No Java security warnings when accessing a whitelisted Java applet, unlike the user exception site list
  • Easy system-wide installation and updating of the ruleset

This post deals with a common scenario for Mac admins: you’re in an established Windows Server Active Directory environment that offers Certificate Authority services. Clients may already have your domain’s CA in their trusted cert store so extending this to sign a Java Deployment Rule Set JAR may make sense. The process of deploying the DeploymentRuleSet.jar file is outside the scope of this article although I did include a postinstall script as an addendum to assist with the installation of the signed certificate chain that this article will help you create. With that said, let’s get underway.

The process

Generate a new keystore and key

To perform the various key request and code signing operations, the simplest approach is to create a fresh Java keystore file using the same password as the Java default JKS password. You’re free to adjust the -keyalg and -keysize settings as needed.

$ keytool -genkey -alias mykey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass changeit

Generate a Certificate Signing Request

In order to verify and sign the code signing certificate the Windows CA is going to need a certificate signing request (CSR) to process. This command creates one based on the key we generated in the previous step.

$ keytool -certreq -alias mykey -file mykey.csr -keystore keystore.jks -storepass changeit

Extract the private key from the keystore

To submit a signing request, we’ll need the private key as well as the public one. The easiest way to get the private key out of an existing keystore is to import the keystore into a newly-created keystore, selecting only the key we are interested in and storing it as PKCS#12. The Windows tool we’ll use later can process PKCS#12 keys so we don’t need to do any further conversion.

$ keytool -v -importkeystore -srckeystore keystore.jks -srcalias mykey -destkeystore myp12file.p12 -deststoretype PKCS12

Rename private key file

Windows Server likes certain things to be a certain way and dealing with certificates is no different, so we must rename our PKCS#12 file to have a .pfx extension to allow certreq.exe to play nice.

$ mv myp12file.p12 myp12file.pfx

Sign the CSR using Windows Server

The files you will need to process the signing request are “mykey.csr”, “mykey.cer” and “myp12file.pfx”. Sign your generated signing request using a user account that has Read and Enroll rights to a template configured for code signing on the Windows Server CA. In the example here we’re using a template named “MyCodeSigningTemplate”. See here for more info on how to create a code signing Certificate Template with Windows Server: http://technet.microsoft.com/en-us/library/cc730826(v=ws.10).aspx

Allow certreq.exe to overwrite mykey.cer and mykey.csr when prompted by the “certreq” command.

C:\Windows\system32>certreq -submit -attrib "CertificateTemplate:MyCodeSigningTemplate" mykey.csr mykey.cer myp12file.pfx

Import signed key and CA into keystore

We need to add the signed key and signing CA (and any intermediate CA certs) back into our keystore so we can use it for code signing.

$ keytool -importcert -keystore keystore.jks -file ca-certificate.pem -alias CARoot -storepass changeit
Certificate was added to keystore
$ keytool -importcert -keystore keystore.jks -file mykey.cer -alias mykey -storepass changeit -trustcacerts
Certificate reply was installed in keystore

Create DeploymentRuleSet.jar and sign it with the newly signed key

Now we can get down to the business of creating a .jar file and signing it with our shiny new key. We stash ruleset.xml into a JAR using the “jar” command. Next, we use “jarsigner” to sign “DeploymentRuleSet.jar” with our key which is retrieved from our Java keystore using the “mykey” alias.

$ jar -cvf DeploymentRuleSet.jar ruleset.xml
added manifest
adding: ruleset.xml(in = 266) (out= 225)(deflated 15%)
$ jarsigner -keystore keystore.jks -storepass changeit DeploymentRuleSet.jar mykey
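For reference, a minimal ruleset.xml of the kind stashed into the JAR above might look like the sketch below, using the wildcard domain placeholder from earlier; consult Oracle’s Deployment Rule Set documentation for the full rule syntax:

```xml
<ruleset version="1.0+">
  <!-- Run whitelisted in-house applets without security prompts -->
  <rule>
    <id location="https://mydomain.myorg.com" />
    <action permission="run" />
  </rule>
  <!-- Everything else falls through to the default security behavior -->
  <rule>
    <id />
    <action permission="default" />
  </rule>
</ruleset>
```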

Combine signing key and associated root CA certificates

In order to easily distribute our public signing cert as well as those of our CA and any intermediate CAs they should be concatenated into one single file. The order to concatenate them in is CA -> Intermediate -> (Optional intermediates) -> mykey.pem.

$ cat rootCA.pem intermediateCert1.pem intermediateCert2.pem mykey.pem > mychain.pem

Import mychain.pem into Java keystore on client(s)

In order for the Java browser plugin to accept our Deployment Rule Set without complaining, we need to add the code signing key’s public certificate to the Java keystore.
The Java home path for the browser plugin is different from system Java, so we need to import our certificate chain into the browser plugin-specific keystore located at “/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/cacerts”. To make the certificate chain available to standalone Java applications as well, it must be imported into the system Java keystore at “/Library/Java/Home/lib/security/cacerts”.

Both keystores use the same default password: “changeit”. For enhanced security, it may be a good idea to change the password for the individual keystores to a new one after importing the certificate chain. This is optional, but a security note worth mentioning.

$ keytool -importcert -keystore /Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/cacerts -storepass changeit -alias mykey -file mychain.pem -noprompt
$ keytool -importcert -keystore /Library/Java/Home/lib/security/cacerts -storepass changeit -alias mykey -file mychain.pem -noprompt

Testing the Deployment Rule Set

Excelsior! We should now be able to place our DeploymentRuleSet.jar file into its designated path and load a web page we whitelisted in the ruleset.xml file. If all went well, the Java application on the web page will load without any warnings from the JVM about the DRS using an untrusted self-signed certificate or about the application being blocked because of its unsigned or self-signed status. You can verify the presence of an active Deployment Rule Set by navigating to the “Java” preference pane in System Preferences and clicking the “Security” tab. If active, the tab will contain a line of blue text that says “View the active Deployment Rule Set”, which can be clicked to view the current rule set in a new window. The Deployment Rule Set view also allows inspection of the signing certificate and its associated root certificates. These should match the code signing key’s certificate and any root and intermediary CA certificates used in our previous steps.

Java 7u51 Security tab

Java 7u51 Deployment Rule Set

Java 7u51 DRS Certificates


Hopefully this will help a few Mac admins with deploying a self-signed DeploymentRuleSet.jar file using their organization’s local CA. If you have questions or comments leave them at the end of this post, find me on Twitter or on Freenode IRC in ##osx-server.

Addendum: Package postinstall

To make distribution of the signing certificate chain a little easier I’ve included a postinstall script that can be added to an installer package. The postinstall script will check the browser plugin and system keystores for the presence of the “my_chain” alias and if it is not found in either one of the keystores it adds the certificate chain. The script expects the installer to drop “my_chain.pem” into /tmp and it securely removes the file after completion of the script. You are free to change file names and aliases as needed.


#!/bin/bash

# Check Java Plugin and System keystores for existence of the signing cert.
#   If found, we skip installation and report success. If not found, tag the
#   keystore as needing installation and proceed with installation. Check result
#   of installation afterwards and log the result for reporting later.

# Executable and keystore file statics
KEYTOOL_LIST="/usr/bin/keytool -list -storepass changeit -keystore "
KEYTOOL_IMPORT="/usr/bin/keytool -importcert -storepass changeit -trustcacerts -file /tmp/my_chain.pem -alias my_chain -noprompt -keystore "
JAVA_PLUGIN="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/cacerts"
JAVA_SYSTEM="/Library/Java/Home/lib/security/cacerts"
LOGGER="/usr/bin/logger -t JAVADRSINSTALL"
REMOVE_PEM="/usr/bin/srm /tmp/my_chain.pem"

# Initialize reporting variables. I like string comparisons, deal with it.
# '0' = install not needed, '1' = install needed (or failed), '2' = install succeeded
keystore_plugin='0'
keystore_system='0'

# Check whether the signing cert is installed in the Java plugin keystore
if ! ${KEYTOOL_LIST} "${JAVA_PLUGIN}" | grep -q my_chain; then
    ${LOGGER} "Cert chain for Java DRS must be installed in Java Plugin."
    keystore_plugin='1'
fi

# Check whether the signing cert is installed in the System Java keystore
if ! ${KEYTOOL_LIST} "${JAVA_SYSTEM}" | grep -q my_chain; then
    ${LOGGER} "Cert chain for Java DRS must be installed in System Java Home."
    keystore_system='1'
fi

# If we didn't find the signing key in the keystores we need to install them.

# Install into Java plugin keystore
if [[ $keystore_plugin == '1' ]]; then
    ${LOGGER} "Installing cert chain for Java DRS into JavaAppletPlugin"
    ${KEYTOOL_IMPORT} "${JAVA_PLUGIN}" > /dev/null 2>&1

    # Check whether our signing key is now in the keystore
    if ${KEYTOOL_LIST} "${JAVA_PLUGIN}" | grep -q my_chain; then
        keystore_plugin='2'
    fi
fi

# Install into System Java keystore
if [[ $keystore_system == '1' ]]; then
    ${LOGGER} "Installing cert chain for Java DRS into System Java Home"
    ${KEYTOOL_IMPORT} "${JAVA_SYSTEM}" > /dev/null 2>&1

    # Check whether our signing key is now in the keystore
    if ${KEYTOOL_LIST} "${JAVA_SYSTEM}" | grep -q my_chain; then
        keystore_system='2'
    fi
fi

# Report on status of installs, log any failures and securely remove our key

# No installation needed for either keystore, report it
if [[ ($keystore_plugin == '0') && ($keystore_system == '0') ]]; then
    ${LOGGER} "Java DRS cert chain install not needed."

# Both installs failed, report it
elif [[ ($keystore_plugin == '1') && ($keystore_system == '1') ]]; then
    ${LOGGER} "Java DRS cert chain install into all keystores failed."

# Both installs succeeded, report success for all keystores
elif [[ ($keystore_plugin == '2') && ($keystore_system == '2') ]]; then
    ${LOGGER} "Java DRS cert chain install into all keystores complete."

# Java Plugin installation not needed, System Java keystore succeeded.
elif [[ ($keystore_plugin == '0') && ($keystore_system == '2') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore not needed."
    ${LOGGER} "Java DRS cert chain install into System Java keystore successful."

# Java Plugin installation succeeded, System Java keystore not needed.
elif [[ ($keystore_plugin == '2') && ($keystore_system == '0') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore successful."
    ${LOGGER} "Java DRS cert chain install into System Java keystore not needed."

# Java Plugin installation not needed, System Java keystore failed.
elif [[ ($keystore_plugin == '0') && ($keystore_system == '1') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore not needed."
    ${LOGGER} "Java DRS cert chain install into System Java keystore failed."

# Java Plugin installation failed, System Java keystore not needed.
elif [[ ($keystore_plugin == '1') && ($keystore_system == '0') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore failed."
    ${LOGGER} "Java DRS cert chain install into System Java keystore not needed."

# Java Plugin installation failed, System Java keystore succeeded.
elif [[ ($keystore_plugin == '1') && ($keystore_system == '2') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore failed."
    ${LOGGER} "Java DRS cert chain install into System Java keystore successful."

# Java Plugin installation succeeded, System Java keystore failed.
elif [[ ($keystore_plugin == '2') && ($keystore_system == '1') ]]; then
    ${LOGGER} "Java DRS cert chain install into Java Plugin keystore successful."
    ${LOGGER} "Java DRS cert chain install into System Java keystore failed."
fi

# Securely remove the dropped certificate chain
${REMOVE_PEM} > /dev/null 2>&1

exit 0

Read More