Downloading, Caching & Decoding Images Asynchronously with Alamofire: Part 2 (Swift 4)

Posted 8/11/2018.

Updated for Swift 4, Xcode 9.4.1, Alamofire 4.7.3, and AlamofireImage 3.3.1.

In Part 1 of this tutorial, I explained how to use the Alamofire and AlamofireImage libraries to asynchronously download and cache images to be displayed in a UICollectionView. At the end of that project, we still had a performance problem. UIImage by default waits until right before display to decode an image, and it does this decoding synchronously. Particularly when using prototype cells with large images in a UICollectionView or UITableView, this decoding step can cause a noticeable stutter while scrolling.

We'll continue where Part 1 left off by adding asynchronous image decoding to the Alamofire workflow.

If you haven't read Part 1 yet, you may want to read it first to familiarize yourself with the Glacier Scenics project we're building on here. Alternatively, you can download the completed project at the end of Part 1 in the GlacierScenicsAlamofire_Swift_4 folder here and continue from there.

The Problem

There are two steps in the decode process. Most of the images displayed in an iOS app use a compressed image format, so the first step is to inflate the image data. AlamofireImage actually handles this step for us; you can see how in the af_inflate function found in UIImage+AlamofireImage.swift.

The second step is to pre-draw the image so UIImageView does not need to do this at the last minute. If you run the project created in Part 1 in Instruments using the Time Profiler, you'll notice three CA::Render symbols on the main thread: prepare_image, copy_image, and create_image. These are relatively expensive operations that we would like to move off the main thread, and that is what we'll focus on in this tutorial.

Decoding

Let's start by taking a look at the updated PhotosManager class that will now handle image decoding in addition to downloading and caching.

import UIKit
import Alamofire
import AlamofireImage

class PhotosManager {
    
    lazy var photos: [Photo] = {
        guard let url = Bundle.main.url(forResource: "GlacierScenics", withExtension: "plist"),
            let data = try? Data(contentsOf: url),
            let photos = try? PropertyListDecoder().decode([Photo].self, from: data) else { return [] }
        return photos
    }()
    
    let decodeQueue: OperationQueue = {
        let queue = OperationQueue()
        queue.underlyingQueue = DispatchQueue(label: "com.GlacierScenics.imageDecoder", attributes: .concurrent)
        queue.maxConcurrentOperationCount = 4
        return queue
    }()
    
    // megabytes() is a small UInt64 helper extension defined in the sample project
    let imageCache = AutoPurgingImageCache(
        memoryCapacity: UInt64(100).megabytes(),
        preferredMemoryUsageAfterPurge: UInt64(60).megabytes()
    )
    
    //MARK: - Image Downloading
    
    func retrieveImage(for url: String, completion: @escaping (UIImage) -> Void) -> ImageRequest {
        // Dispatch the response onto the decode queue's underlying GCD queue so
        // decoding can begin without a round trip through the main queue.
        let queue = decodeQueue.underlyingQueue
        let request = Alamofire.request(url, method: .get)
        let imageRequest = ImageRequest(request: request)
        let serializer = DataRequest.imageResponseSerializer()
        imageRequest.request.response(queue: queue, responseSerializer: serializer) { response in
            guard let image = response.result.value else { return }
            // Decode off the main thread, then deliver and cache the decoded image.
            imageRequest.decodeOperation = self.decode(image) { image in
                completion(image)
                self.cache(image, for: url)
            }
        }
        return imageRequest
    }
    
    //MARK: - Image Caching
    
    func cache(_ image: Image, for url: String) {
        imageCache.add(image, withIdentifier: url)
    }
    
    func cachedImage(for url: String) -> Image? {
        return imageCache.image(withIdentifier: url)
    }
    
    //MARK: - Image Decoding
    
    func decode(_ image: UIImage, completion: @escaping (UIImage) -> Void) -> DecodeOperation {
        let operation = DecodeOperation(image: image, completion: completion)
        decodeQueue.addOperation(operation)
        return operation
    }
    
}

There are two classes here we haven't seen yet: ImageRequest and an Operation subclass called DecodeOperation. Don't worry, we'll go over both in detail below! I wanted to start with the changes to PhotosManager because it outlines the broader approach we'll be using here.

To set up for image decoding, the class now has an OperationQueue property called decodeQueue. When the image download finishes, we want to continue with decoding on a background thread. Since the main queue might be busy, we want to avoid jumping to the main queue just to kick off decoding on another queue. Alamofire accommodates this by letting us specify the queue the response should be dispatched onto. However, it only accepts GCD dispatch queues, and we want to use an OperationQueue to manage our decode operations.

Luckily, OperationQueue is built on top of GCD, and each operation queue has an underlyingQueue property of type DispatchQueue. So we'll start retrieveImage by grabbing this underlying queue so that we can set it on the Alamofire response. Also, now that we need to do more configuration on the Alamofire request, we can't use AlamofireImage's responseImage helper function, which applies the library's image response serializer for us. That's OK; we can still use the image response serializer directly. But first, notice that we're returning an ImageRequest from retrieveImage instead of an Alamofire Request object. Let's look at this class.

import UIKit
import Alamofire

class ImageRequest {
    
    var decodeOperation: Operation?
    var request: DataRequest
    
    init(request: DataRequest) {
        self.request = request
    }
    
    func cancel() {
        decodeOperation?.cancel()
        request.cancel()
    }
    
}

As you can see, this class is essentially a wrapper that groups an Alamofire request with the Operation that may exist for decoding. Recall from Part 1 that since collection view cells are reused, we need to cancel any existing network request before reusing a cell; otherwise, a request could come back and populate the cell with the wrong image. The same goes for a decoding operation. At the time a cell is reused, three states are possible: no request or decode operation is in flight; a request is in flight but has not returned, so no decode operation has started; or the request has finished and decoding has started. The ImageRequest class groups the request and decode operation together so that a single call cancels whichever of the two is in progress.
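For example, when a cell is about to be reused, a single cancel call covers all three of those cases. Here's a minimal sketch, assuming the cell keeps its ImageRequest in an imageRequest property, clears its state in a reset method as in Part 1, and has an imageView outlet (the exact names in the project may differ):

var imageRequest: ImageRequest?

func reset() {
    // One call cancels the network request and/or the decode operation,
    // whichever happens to be in flight when the cell is reused.
    imageRequest?.cancel()
    imageRequest = nil
    imageView.image = nil
}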

OK, back to the retrieveImage function in PhotosManager. We continue by creating the ImageRequest object and then calling response on the ImageRequest's request. This function takes a dispatch queue, a response serializer, and a completion handler. We pass decodeQueue's underlying queue, as mentioned above, along with AlamofireImage's image response serializer and our completion block.

In the completion block, we start by making sure we got an image back from the request. Since we used the imageResponseSerializer, we're expecting the result value to be an image. If we don't have an image, we simply return. Otherwise, we asynchronously decode the image.

Below, we've defined a decode function that creates a DecodeOperation, adds it to the decodeQueue, and returns the newly created operation. This function kicks off the decoding, but retrieveImage still needs to keep track of the returned operation so that it can be canceled. So back in retrieveImage, we set the decodeOperation property on the ImageRequest object.

The decode function also takes a completion block, which it passes on to the DecodeOperation. In that completion (again back in retrieveImage), we call the completion passed into retrieveImage with the decoded image and also cache the decoded image. Finally, at the bottom of retrieveImage, we return the ImageRequest so that it can be canceled.

We're now ready to look at our Operation subclass, DecodeOperation, which will decode the image.

import UIKit

class DecodeOperation: Operation {
    
    let image: UIImage
    let completion: (UIImage) -> Void
    
    init(image: UIImage, completion: @escaping (UIImage) -> Void) {
        self.image = image
        self.completion = completion
    }
    
    override func main() {
        if isCancelled { return }
        let decodedImage = decode(image)
        if isCancelled { return }
        
        OperationQueue.main.addOperation {
            self.completion(decodedImage)
        }
    }
    
    func decode(_ image: UIImage) -> UIImage {
        guard let cgImage = image.cgImage else { return image }
        let size = CGSize(width: cgImage.width, height: cgImage.height)
        // Draw into a bitmap context so the image is decompressed and pre-rendered here, off the main thread.
        guard let context = CGContext(data: nil,
                                      width: Int(size.width),
                                      height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: 0,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return image }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        guard let decodedImage = context.makeImage() else { return image }
        return UIImage(cgImage: decodedImage, scale: image.scale, orientation: image.imageOrientation)
    }
    
}

As we saw in the decode function in PhotosManager, a DecodeOperation is initialized with the image to be decoded and a completion block. Then we just need to override Operation's main function and implement the actual decoding. In main, if the operation has already been canceled, we return. Otherwise we start the image decoding by calling decode. Since decoding takes time, we check again after the call whether the operation has been canceled. If it hasn't, we finish by calling the completion block with the decoded image on the main queue. This is the completion that was passed into retrieveImage all the way back in our collection view cell; it updates the UI and therefore needs to run on the main queue.

Then in decode, we create a bitmap context, draw the image into it at its full size, and return a new UIImage from the decoded CGImage, using the input image's scale and orientation.

Note that we're specifying CGImageAlphaInfo.noneSkipLast for the bitmapInfo in the CGContext initializer. The images we're getting from the network are opaque, and telling Core Graphics that our images are opaque helps with scroll performance. If the images you're downloading in your project have alpha channels, you'll want to use a different setting.
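For example, if your images do have transparency, one common option is a premultiplied RGBA bitmap context. As a rough sketch (this variant is not part of the Glacier Scenics project), only the context creation inside decode would change:

// Hypothetical variant of the context setup in decode(_:) for images with an alpha channel.
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
guard let context = CGContext(data: nil,
                              width: Int(size.width),
                              height: Int(size.height),
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: bitmapInfo) else { return image }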

The last step is to modify our collection view cell to keep track of an ImageRequest instead of a Request. That is the only change required for the cell to incorporate decoding.
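For reference, the cell's configure path now looks roughly like the sketch below. This is based on the Part 1 cell rather than the exact project code; photosManager, photo.imageURL, imageView, and configure(with:photosManager:) are stand-ins for whatever names the cell actually uses, and imageRequest is the property from the earlier reset sketch.

func configure(with photo: Photo, photosManager: PhotosManager) {
    // Cached images were decoded before being stored, so they can be displayed immediately.
    if let cachedImage = photosManager.cachedImage(for: photo.imageURL) {
        imageView.image = cachedImage
        return
    }
    // Otherwise download and decode asynchronously, holding on to the ImageRequest
    // so reset() can cancel it if the cell is reused first.
    imageRequest = photosManager.retrieveImage(for: photo.imageURL) { [weak self] image in
        self?.imageView.image = image
    }
}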

An important note is that we're not decoding images returned from the cache. That's only because AlamofireImage's AutoPurgingImageCache, which we're using, is an in-memory cache: once an image has been decoded and the decoded version stored in memory, we don't need to decode it again. However, if the cache were stored on disk and persisted between launches, we would need to decode images returned from the cache asynchronously as well. In that case we would keep track of that DecodeOperation and cancel it in reset too. And if an image is purged from the in-memory cache within a session, the cell will call retrieveImage on the PhotosManager again and the image will be decoded.
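As a rough sketch of that hypothetical disk-backed case (none of this is in the sample project), the cell could reuse PhotosManager's decode function and hold on to the returned operation:

// Hypothetical: only needed if the cache handed back undecoded images loaded from disk.
var decodeOperation: Operation?

func display(diskCachedImage: UIImage, using photosManager: PhotosManager) {
    decodeOperation = photosManager.decode(diskCachedImage) { [weak self] decodedImage in
        self?.imageView.image = decodedImage
    }
}

The cell's reset method would then also call decodeOperation?.cancel().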

With collection view scrolling performance, the goal is to return from collectionView(_:cellForItemAt:) as quickly as possible. Decoding images asynchronously helps us do that so we can hit the 60 FPS target on iOS.

The source code for this project is available in the "GlacierScenicsAlamofireImageDecoding_Swift_4" folder here.