We have been doing open source for a while; you may have come across some of our work on GitHub or read some of our stories. We don't try to reinvent the wheel, but there are many components we need specifically for our workflow, or things that need to be customised for the apps we are building. So we built many frameworks and apps. And since we are using them in our production apps, we thought it would be a good idea to share them with the world. This is a win-win situation: we contribute back to the community while getting lots of feedback and advice. Being a small iOS team, doing client projects full time while trying to find a bit of free time for open source is very challenging.
Open source is all about building abstractions. By separating responsibilities and making reusable frameworks, we learn a great deal about Swift, as well as picking up some nitty-gritty details about the APIs we are working with. But we have never really talked about how we do things. So this will be a series of open source stories, detailing the technical aspects behind our work as well as the open source experience.
First, let's talk about Cache, a framework for persisting objects. Here we look at how its APIs evolved to support new features of the Swift language and the iOS and tvOS platforms, while remaining flexible and maintainable.
Cache
Cache doesn't claim to be unique in this area, but it's not another monster library that tries to do everything. It does nothing but caching, and it does it well. It offers a good public API with out-of-the-box implementations and great customisation possibilities.
There are many possible solutions for caching on iOS, like SQLite, Core Data, Realm, or other third-party libraries. What we want from Cache is a simple way to store JSON data on disk, expiry management, and an API we are comfortable with.
From a user perspective, I want to reliably save and load an object using a key, and to be able to do that either synchronously or asynchronously. Here are the APIs we aimed for.
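Roughly, we wanted something like the sketch below. Here cache, user and User are placeholders for illustration, not the final API.

```swift
// The kind of API we were aiming for (sketch):
// save and load by key, synchronously...
try cache.addObject(user, forKey: "current user")
let cachedUser: User? = cache.object(forKey: "current user")

// ...or asynchronously, without blocking the caller.
cache.async.addObject(user, forKey: "current user") { error in
  print(error ?? "saved")
}
```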
Cachable
In the first releases, we introduced the Cachable protocol, given that objects need to be serialised to and deserialised from Data for disk storage. We also conformed most primitive types to Cachable, so users don't have to do this themselves. For memory storage we use NSCache under the hood, so objects are saved as is.
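The protocol itself boils down to converting a value to and from Data. Here is a sketch of its shape, inferred from the conformances shown below; the real declaration may differ slightly.

```swift
// Sketch of the Cachable requirements: a type that can serialise itself to Data and back.
public protocol Cachable {
  associatedtype CacheType

  static func decode(_ data: Data) -> CacheType?
  func encode() -> Data?
}
```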
Cache is based on the concept of front and back caches. A request to the front cache should be less time- and memory-consuming (NSCache is used by default here). The difference between front and back caching is that the back cache is used for content that outlives the application life cycle. Think of it more as a convenient way to store user information that should persist across application launches. A disk cache is the most reliable choice here.
```swift
let cache = HybridCache(name: "Custom", config: config)

try cache.addObject("This is a string", forKey: "string", expiry: .never)

let entry: CacheEntry<String>? = cache.cacheEntry(forKey: "string")
print(entry?.object)

let image: UIImage? = cache.object(forKey: "image")
```
HybridCache has generic functions with Cachable type constraints, so it is type-safe for all Cachable conformances:
```swift
public class HybridCache: BasicHybridCache {
  /**
   Adds passed object to the front and back cache storages.
   - Parameter object: Object that needs to be cached
   - Parameter key: Unique key to identify the object in the cache
   - Parameter expiry: Expiration date for the cached object
   */
  public func addObject<T: Cachable>(_ object: T, forKey key: String, expiry: Expiry? = nil) throws {
    try manager.addObject(object, forKey: key, expiry: expiry)
  }
}
```
and this works even for custom types and UIImage:
```swift
struct User: Cachable {
  static func decode(_ data: Data) -> User? {
    var object: User?
    // Decode your object from data
    return object
  }

  func encode() -> Data? {
    var data: Data?
    // Encode your object to data
    return data
  }
}
```
Async
Cache is sync by default, meaning all methods are blocking. To access the cache in an async manner, there's a convenient async property that leads to AsyncHybridCache. Both share the same CacheManager under the hood, so the stored objects remain the same; they are just different interfaces.
```swift
// Add object to cache
cache.async.addObject("This is a string", forKey: "string") { error in
  print(error)
}

// Get object from cache
cache.async.object(forKey: "string") { (string: String?) in
  print(string) // Prints "This is a string"
}
```
JSON
As we deal with JSON most of the time, there's a JSON enum that encapsulates a top-level JSON object or JSON array, using JSONSerialization to convert to Data for the Cachable conformance:
```swift
/** Helper enum to work with JSON arrays and dictionaries. */
public enum JSON {
  /// JSON array
  case array([Any])
  /// JSON dictionary
  case dictionary([String: Any])

  /// Converts value to Any
  public var object: Any {
    switch self {
    case .array(let object):
      return object
    case .dictionary(let object):
      return object
    }
  }
}
```
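A minimal usage sketch, assuming the HybridCache API shown above; since JSON conforms to Cachable, it can be saved like any other value.

```swift
// Wrap a dictionary in the JSON enum and cache it.
let profile = JSON.dictionary(["name": "John", "city": "Oslo"])
try cache.addObject(profile, forKey: "profile")

// Read it back; `object` exposes the underlying Any value.
let cached: JSON? = cache.object(forKey: "profile")
print(cached?.object ?? "missing")
```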
Codable in Swift 4
One of the most important features of Swift 4 is Codable. Types conforming to Codable can be mapped to and from JSON. There's JSONSerialization under the hood, but all we need to care about is conforming our types to Codable and making sure our model properties match the keys in the JSON data. How cool is it to just declare a model, decode it from JSON and persist it to Cache, all without any hassle? So in Cache 4.0 we refactored the public APIs to better support Codable.
Given the rumour that NSCache would be renamed to Cache, and to avoid our own Cache type stealing the Cache namespace, we renamed our Cache classes to Storage. With better encapsulated Config objects for disk and memory storage, declaring your own Storage is easy.
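Here is a rough usage sketch of what that looks like. The exact DiskConfig and MemoryConfig parameters are assumptions based on the initializer shown further below, not a verbatim example.

```swift
// A plain Codable model, no Cachable conformance needed any more.
struct User: Codable {
  let firstName: String
  let lastName: String
}

// Configure disk (and optionally memory) storage.
let diskConfig = DiskConfig(name: "UserCache")
let memoryConfig = MemoryConfig(expiry: .never)

let storage = try Storage(diskConfig: diskConfig, memoryConfig: memoryConfig)

// Save and load with Codable, no manual encoding.
try storage.setObject(User(firstName: "John", lastName: "Snow"), forKey: "user")
let user = try storage.object(ofType: User.self, forKey: "user")
```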
Chain of responsibility
Cache is built around the chain-of-responsibility pattern, in which there are many processing objects, each of which knows how to do one task and delegates to the next one. But that's just an implementation detail. All you need to know about is Storage, which saves and loads Codable objects.
Storages are designed with the chain-of-responsibility pattern in mind, where each Storage acts as a processing object. We deal with Storage only, but there is a chain under the hood:
Storage -> SyncStorage -> TypeWrapperStorage -> HybridStorage -> DiskStorage & MemoryStorage
Each processing object contains logic that defines the types of command objects it can handle; the rest are passed to the next processing object in the chain.
```swift
/// Manages storage. Uses memory storage if specified.
/// Synchronous by default. Use `async` for asynchronous operations.
public class Storage {
  /// Used for sync operations
  fileprivate let sync: StorageAware

  /// Storage used internally by both sync and async storages
  private let internalStorage: StorageAware

  /// Initialize storage with configuration options.
  ///
  /// - Parameters:
  ///   - diskConfig: Configuration for disk storage
  ///   - memoryConfig: Optional. Pass config if you want a memory cache
  /// - Throws: StorageError if any.
  public required init(diskConfig: DiskConfig, memoryConfig: MemoryConfig? = nil) throws {
    // Disk or Hybrid
    let storage: StorageAware
    let disk = try DiskStorage(config: diskConfig)

    if let memoryConfig = memoryConfig {
      let memory = MemoryStorage(config: memoryConfig)
      storage = HybridStorage(memoryStorage: memory, diskStorage: disk)
    } else {
      storage = disk
    }

    // Wrapper
    self.internalStorage = TypeWrapperStorage(storage: storage)

    // Sync
    self.sync = SyncStorage(
      storage: internalStorage,
      serialQueue: DispatchQueue(label: "Cache.SyncStorage.SerialQueue")
    )
  }

  /// Used for async operations
  public lazy var async: AsyncStorage = AsyncStorage(
    storage: self.internalStorage,
    serialQueue: DispatchQueue(label: "Cache.AsyncStorage.SerialQueue")
  )
}
```
Storage deals with constructing the inner Storages based on the passed-in configurations. SyncStorage manages a serial queue for synchronised, blocking access. HybridStorage coordinates MemoryStorage and DiskStorage, and so on.
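As an illustration of one link in the chain, here is a simplified sketch of SyncStorage; it shows the delegation idea only, not the exact implementation.

```swift
// Simplified sketch: SyncStorage wraps the next storage in the chain
// and funnels every call through a serial queue, blocking the caller.
public class SyncStorage {
  private let internalStorage: StorageAware
  private let serialQueue: DispatchQueue

  init(storage: StorageAware, serialQueue: DispatchQueue) {
    self.internalStorage = storage
    self.serialQueue = serialQueue
  }

  public func removeObject(forKey key: String) throws {
    // serialQueue.sync rethrows, so errors from the inner storage propagate to the caller.
    try serialQueue.sync {
      try internalStorage.removeObject(forKey: key)
    }
  }
  // setObject, object, entry, ... delegate in exactly the same way.
}
```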
What is TypeWrapper?
Primitive types like Int, String and Bool conform to Codable, so it is perfectly fine to call storage.save("a string", forKey: "myKey") and the compiler is happy. But since we use JSONEncoder and JSONDecoder under the hood, saving primitive types directly can lead to runtime errors like "Top-level T encoded as number JSON fragment" or "Expected to decode T but found a dictionary instead.", and that was the reason for PrimitiveStorage.
Here we need to catch those errors and fall back to PrimitiveWrapper, so that we always have a top-level object that can be serialised to and from JSON data.
```swift
extension PrimitiveStorage: StorageAware {
  public func entry<T: Codable>(forKey key: String) throws -> Entry<T> {
    do {
      return try internalStorage.entry(forKey: key) as Entry<T>
    } catch let error as Swift.DecodingError {
      // Expected to decode T but found a dictionary instead.
      switch error {
      case .typeMismatch(_, let context) where context.codingPath.isEmpty:
        let wrapperEntry = try internalStorage.entry(forKey: key) as Entry<PrimitiveWrapper<T>>
        let primitiveEntry = Entry(object: wrapperEntry.object.value,
                                   expiry: wrapperEntry.expiry)
        return primitiveEntry
      default:
        throw error
      }
    }
  }

  public func setObject<T: Codable>(_ object: T, forKey key: String,
                                    expiry: Expiry? = nil) throws {
    do {
      try internalStorage.setObject(object, forKey: key, expiry: expiry)
    } catch let error as Swift.EncodingError {
      // Top-level T encoded as number JSON fragment
      switch error {
      case .invalidValue(_, let context) where context.codingPath.isEmpty:
        let wrapper = PrimitiveWrapper<T>(value: object)
        try internalStorage.setObject(wrapper, forKey: key, expiry: expiry)
      default:
        break
      }
    }
  }
}
```
Here PrimitiveWrapper is a simple generic struct with a Codable constraint:
```swift
struct PrimitiveWrapper<T: Codable>: Codable {
  let value: T

  init(value: T) {
    self.value = value
  }
}
```
Later I thought it would be less code if we always performed the wrapping, and that led to my Add TypeWrapperStorage pull request. This way the code is easier to reason about, but there is some overhead.
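The idea behind TypeWrapperStorage is sketched below; TypeWrapper and the method bodies are illustrative, not the exact code from the pull request.

```swift
// Sketch: always wrap, so the encoder never sees a bare top-level fragment.
struct TypeWrapper<T: Codable>: Codable {
  let object: T
}

class TypeWrapperStorage {
  let internalStorage: StorageAware

  init(storage: StorageAware) {
    self.internalStorage = storage
  }

  func setObject<T: Codable>(_ object: T, forKey key: String, expiry: Expiry? = nil) throws {
    // Wrap unconditionally before delegating down the chain.
    try internalStorage.setObject(TypeWrapper(object: object), forKey: key, expiry: expiry)
  }

  func object<T: Codable>(ofType type: T.Type, forKey key: String) throws -> T {
    // Unwrap on the way out.
    let wrapper = try internalStorage.object(ofType: TypeWrapper<T>.self, forKey: key)
    return wrapper.object
  }
}
```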
StorageAware protocol
To make all Storages easily chainable, they all conform to the StorageAware protocol, which defines the minimal set of functions a Storage must support.
```swift
/// A protocol used for saving and loading from storage
public protocol StorageAware {
  func object<T: Codable>(ofType type: T.Type, forKey key: String) throws -> T
  func entry<T: Codable>(ofType type: T.Type, forKey key: String) throws -> Entry<T>
  func removeObject(forKey key: String) throws
  func setObject<T: Codable>(_ object: T, forKey key: String, expiry: Expiry?) throws
  func existsObject<T: Codable>(ofType type: T.Type, forKey key: String) throws -> Bool
  func removeAll() throws
  func removeExpiredObjects() throws
}
```
The cool thing about this is that we can leverage protocol extensions in Swift to provide default implementations for StorageAware conformers. From the Entry info, we can infer the object and whether it exists or not.
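Here is a sketch of how those defaults fall out of the entry function, assuming Entry exposes its object as in the code above.

```swift
// Default implementations derived from `entry(ofType:forKey:)`.
public extension StorageAware {
  func object<T: Codable>(ofType type: T.Type, forKey key: String) throws -> T {
    return try entry(ofType: type, forKey: key).object
  }

  func existsObject<T: Codable>(ofType type: T.Type, forKey key: String) throws -> Bool {
    do {
      _ = try object(ofType: type, forKey: key)
      return true
    } catch {
      return false
    }
  }
}
```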
All functions have Codable constraints, so we get a very type-safe experience:
```swift
// Save to storage
try? storage.setObject(10, forKey: "score")
try? storage.setObject("Oslo", forKey: "my favorite city", expiry: .never)

// Load
let score = try? storage.object(ofType: Int.self, forKey: "score")
let favoriteCity = try? storage.object(ofType: String.self, forKey: "my favorite city")

// Check if an object exists
let hasFavoriteCity = try? storage.existsObject(ofType: String.self, forKey: "my favorite city")
```
Sync and Async
Storage is sync by default. You may have noticed that all sync functions are marked with throws, throwing StorageError. Our design is that try/catch is for sync and Result is for async. For async we can't use try/catch, since the result is delivered at a later time, so we use a completion closure to deliver the result to the caller asynchronously.
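The async side looks roughly like this; the exact shape of the Result type in the completion closure is an assumption and may differ between versions.

```swift
// Async: errors arrive later, through the completion closure, not via try/catch.
storage.async.setObject("Oslo", forKey: "my favorite city") { result in
  switch result {
  case .value:
    print("saved")
  case .error(let error):
    print(error)
  }
}

storage.async.object(ofType: String.self, forKey: "my favorite city") { result in
  switch result {
  case .value(let city):
    print(city) // Prints "Oslo"
  case .error(let error):
    print(error)
  }
}
```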
All async Storages conform to the AsyncStorageAware protocol, just as the sync ones conform to StorageAware. To guarantee that no read and write happen at the same time, we use a serial DispatchQueue to dispatch operations in order.
Since we want to support both sync and async operations on the same Storage, we initially shared one serial queue between SyncStorage and AsyncStorage, so no matter how many operations were executed, they all ran in a safe order. But because we use serialQueue.sync for SyncStorage to get blocking behaviour and serialQueue.async for AsyncStorage, this can cause a deadlock! So eventually we went with separate DispatchQueues for SyncStorage and AsyncStorage, which trades the deadlock for a chance of racing on the critical section if the user calls sync and async interchangeably.
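To make the shared-queue deadlock concrete, here is a minimal illustration (not library code): a sync dispatch onto a serial queue from work already running on that queue can never finish.

```swift
import Foundation

let sharedQueue = DispatchQueue(label: "Cache.SharedSerialQueue")

sharedQueue.async {
  // An async operation is running on the shared queue and now performs
  // a blocking (sync) call on the same queue, e.g. a synchronous read.
  sharedQueue.sync {
    print("never reached") // deadlock: sync waits for the queue, the queue waits for this closure
  }
}
```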
But image does not conform to Codable
UIImage and NSImage do not conform to Codable. Since we designed the APIs to exclusively support Codable, working with images is tricky. Simply conforming UIImage to Codable does not work, and it would not make much sense anyway.
```swift
// WARNING: This does not compile
extension UIImage: Codable {
  // 'required' initializer must be declared directly in class 'UIImage' (not in an extension)
  public required init(from decoder: Decoder) throws {
    let container = try decoder.singleValueContainer()
    let data = try container.decode(Data.self)
    guard let image = UIImage(data: data) else {
      throw MyError.decodingFailed
    }
    // A non-failable initializer cannot delegate to failable initializer 'init(data:)' written with 'init?'
    self.init(data: data)
  }

  public func encode(to encoder: Encoder) throws {
    var container = encoder.singleValueContainer()
    guard let data = UIImagePNGRepresentation(self) else {
      return
    }
    try container.encode(data)
  }
}
```
Essentially, for images, users should save them as Data to disk and persist their file URLs in Storage instead. But to keep the unified Codable experience, we introduced ImageWrapper. If existing types like UIImage can't conform to Codable, a wrapper can:
```swift
public struct ImageWrapper: Codable {
  public let image: Image

  public enum CodingKeys: String, CodingKey {
    case image
  }

  public init(image: Image) {
    self.image = image
  }

  public init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    let data = try container.decode(Data.self, forKey: CodingKeys.image)
    guard let image = Image(data: data) else {
      throw StorageError.decodingFailed
    }

    self.image = image
  }

  public func encode(to encoder: Encoder) throws {
    var container = encoder.container(keyedBy: CodingKeys.self)
    guard let data = image.cache_toData() else {
      throw StorageError.encodingFailed
    }

    try container.encode(data, forKey: CodingKeys.image)
  }
}
```
Then all you have to do is wrap the UIImage inside an ImageWrapper, and you get the same API support as any other Codable.
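A usage sketch, continuing the Storage examples above:

```swift
let wrapper = ImageWrapper(image: UIImage(named: "avatar")!)
try storage.setObject(wrapper, forKey: "avatar")

let cachedWrapper = try storage.object(ofType: ImageWrapper.self, forKey: "avatar")
let image = cachedWrapper.image
```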
But this beautiful API has an overhead caveat. It may not be a big deal, but for libraries that depend on Cache, like Imaginary, which store and fetch images a lot, it can be a huge problem. Here is how the object ends up on disk: there is a top-level JSON object, and the image is converted into a string, and both cause overhead. Ideally the image should be saved as plain Data.
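For illustration only (this is not captured library output): JSONEncoder encodes Data as a Base64 string, so the wrapped image inflates the payload by roughly a third compared to the raw PNG bytes.

```swift
// Illustration: the wrapped image becomes a JSON object with a Base64 string inside.
let data = try JSONEncoder().encode(ImageWrapper(image: image))
print(String(data: data, encoding: .utf8)!)
// {"image":"iVBORw0KGgoAAAANSUhEUgAA..."}  <- Base64-encoded PNG, ~33% larger than the raw Data
```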
Working around
How about Any?
One way to support UIImage is to remove the Codable constraint and use Any, something like below:
```swift
// WARNING: Does not compile
extension Storage {
  func save(object: Any, forKey: String) {
    switch object {
    case let image as Image:
      let data = UIImagePNGRepresentation(image)
    case let object as Codable:
      let encoder = JSONEncoder()
      try? encoder.encode(object)
      // Cannot invoke 'encode' with an argument list of type '(Codable)'
    default:
      break
    }
  }
}

storage.save(15, forKey: "number")
storage.save(image, forKey: "image")
```
This does not compile: for Codable to work, the type needs to be known at compile time, and a protocol can't conform to itself. See "Using JSON Encoder to encode a variable with Codable as type" and "Protocol doesn't conform to itself?" for a more detailed explanation.
So we don't go with this approach.
How about Data Convertible?
The difference between UIImage and Codable is in how they can be converted into Data, and we need Data-convertible objects for disk storage. So we need to encapsulate just this requirement: start with a DataConvertible protocol and make UIImage and Codable conform to it.
```swift
protocol DataConvertible {
  func toData() -> Data
  static func fromData() -> Self
}

// WARNING: Does not compile
extension Codable: DataConvertible {
  // Non-nominal type 'Codable' (aka 'Decodable & Encodable') cannot be extended
}

class MyStorage {
  func save(dataConvertible: DataConvertible, forKey key: String) {
    save(data: dataConvertible.toData(), forKey: key)
  }

  func save(data: Data, forKey key: String) {
    diskStorage.save(data, forKey: key)
  }
}
```
It is, however, not that easy: we can't make the existing Codable protocol conform to our own protocol. This approach is not feasible, so we don't go with it.
How about Data producer?
A protocol extension on Codable simply does not work, so let's go back to object composition with a DataProducer class. It has a generic Codable constraint and stores either a UIImage or a Codable value. When asked to produce Data, it checks which of the two it holds.
```swift
class DataProducer<T: Codable> {
  let object: T?
  let image: Image?

  init(object: T) {
    self.object = object
    self.image = nil
  }

  init(image: Image) {
    self.image = image
    self.object = nil
  }

  func toData() -> Data {
    if let object = object {
      let encoder = JSONEncoder()
      return try! encoder.encode(object)
    } else if let image = image {
      return image.cache_toData()!
    } else {
      return Data()
    }
  }
}

class MyStorage {
  func save<T: Codable>(dataProducer: DataProducer<T>, forKey key: String) {
    save(data: dataProducer.toData(), forKey: key)
  }

  func save(data: Data, forKey key: String) {
    diskStorage.save(data, forKey: key)
  }
}

let storage = MyStorage()
storage.save(dataProducer: DataProducer<String>(object: "hello world"), forKey: "string")
```
This approach is feasible and compiles fine. For disk storage we call toData to produce data, and for memory storage we can just set the inner object on NSCache. But requiring users to wrap everything in a DataProducer is not a pleasant experience. We need a different approach.
Generic Storage
As of Cache 5.0, we tackle this overhead and add support for plain UIImage, while keeping Cache flexible and easily customisable.
Transforming type
A better way to support both UIImage and Codable is to have a generic Storage whose type we can transform. This way we can also transform it to support other custom types if we want.
This way Storage is extremely type-safe: you save and load objects of a single type at a time. But a Storage can be transformed; the underlying storage mechanism remains the same, only the public API supports a different type. This covers the case where a user wants to save both Codable values and UIImages to the same storage. However, we still recommend using a different storage for each type.
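A rough usage sketch of the idea follows; the initializer shape, TransformerFactory, and the transform function names here are assumptions for illustration rather than the exact 5.0 API.

```swift
// Sketch: one storage, two typed views over the same underlying files.
let diskConfig = DiskConfig(name: "Mixed")

// A storage constrained to a Codable type...
let stringStorage: Storage<String> = try Storage(
  diskConfig: diskConfig,
  memoryConfig: MemoryConfig(),
  transformer: TransformerFactory.forCodable(ofType: String.self)
)

// ...transformed into one constrained to images, sharing the same location.
let imageStorage: Storage<UIImage> = stringStorage.transformImage()

try stringStorage.setObject("hello", forKey: "greeting")
try imageStorage.setObject(UIImage(named: "avatar")!, forKey: "avatar")
```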
StorageAware
All Storages still conform to StorageAware, so we keep the nice default implementations from the StorageAware protocol extension. Note that since Storage is now generic, StorageAware has an associatedtype T to reflect the generic value type of Storage:
```swift
/// A protocol used for saving and loading from storage
public protocol StorageAware {
  associatedtype T
  ...
}
```
Since a protocol with an associated type can't be used as a variable type, we can no longer freely chain Storages the way we did before. Cache now has fixed dependencies, meaning SyncStorage explicitly references HybridStorage. We do, however, expose all the Storages as public, so you can compose them the way you want, but the default Storage composition should be good for most cases.
Transformer
When users specify the Storage type, they need to specify a Transformer as well. It is a data structure that contains two functions, fromData and toData. This is needed for DiskStorage, as we must have a way to convert the generic type to and from Data. Since Codable, Data, and UIImage are the most common formats we save and load to Storage, we provide a default TransformerFactory.
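Here is a sketch of what such a transformer looks like, together with a custom one; the exact class shape is an assumption based on the fromData/toData description above.

```swift
// Sketch: a Transformer is just a pair of conversion closures.
public class Transformer<T> {
  let toData: (T) throws -> Data
  let fromData: (Data) throws -> T

  public init(toData: @escaping (T) throws -> Data,
              fromData: @escaping (Data) throws -> T) {
    self.toData = toData
    self.fromData = fromData
  }
}

// A hypothetical custom transformer for URL values, built on Codable.
// Wrapping in an array avoids the top-level fragment issue discussed earlier.
let urlTransformer = Transformer<URL>(
  toData: { url in try JSONEncoder().encode([url]) },
  fromData: { data in try JSONDecoder().decode([URL].self, from: data)[0] }
)
```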
Transform functions
To transform a Storage to a new type, we simply move all the internal objects inside the Storage to the new Storage; they are all reference types, so there is no overhead. We also need to specify the Transformer for the new type. Every Storage has a transform function.
You should definitely take a look at the tests to see how powerful Storage transformation is. Whenever a Storage is transformed, it is constrained to a new type, so all operations are type-safe, yet all objects are saved to the same location.
Function overloading
Another solution is to use function overloading, as sketched below.
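This is an illustrative sketch of the overloading alternative; OverloadedStorage and its setData helper are hypothetical names, not the library's API.

```swift
// Sketch: one overload per kind of value, instead of a generic Transformer.
class OverloadedStorage {
  private let disk: DiskStorage

  init(disk: DiskStorage) {
    self.disk = disk
  }

  // Codable values go through JSONEncoder.
  func setObject<T: Codable>(_ object: T, forKey key: String) throws {
    let data = try JSONEncoder().encode(object)
    try disk.setData(data, forKey: key) // setData is a hypothetical raw-Data entry point
  }

  // Images get their own overload and are stored as raw PNG data.
  func setObject(_ image: UIImage, forKey key: String) throws {
    guard let data = UIImagePNGRepresentation(image) else {
      throw StorageError.encodingFailed
    }
    try disk.setData(data, forKey: key)
  }
}
```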
Where to go from here
The new generic Storage API is type-safe and flexible. It also removes the workaround overhead. We have used it in our image fetcher, Imaginary, to improve performance.
The only constant is change. The Apple platforms and the Swift language evolve faster than you think, and it's good for our frameworks to take advantage of all the new features. We hope you find Cache, and the stories from this refactoring journey, useful.
Original post https://medium.com/hyperoslo/open-source-stories-from-cachable-to-generic-storage-in-cache-418d9a230d51