`microrm` is a crate providing a lightweight ORM on top of SQLite.

Unlike fancier ORM systems, `microrm` is intended to be extremely lightweight and code-light, which by necessity means it is opinionated, and so it lacks the power and flexibility of, say, SeaORM or Diesel. In particular, `microrm` currently makes no attempt to provide database migration support.

`microrm` provides two components: modelling and querying. The intention is that the model is built statically; dynamic models are not directly supported, though they are possible. However, since by design `microrm` does not touch database contents for tables not defined in its model, using raw SQL for any needed dynamic components may be a better choice.

Querying supports a small subset of SQL expressed as type composition; see `QueryInterface` for more details.
A simple example using an SQLite table as an (indexed) key/value store might look something like this:
```rust
use microrm::prelude::*;
use microrm::{Entity, make_index};

#[derive(Debug, Entity, serde::Serialize, serde::Deserialize)]
pub struct KVStore {
    pub key: String,
    pub value: String
}

// the ! in !KVStoreIndex means a type representing a unique index named KVStoreIndex
make_index!(!KVStoreIndex, KVStore::Key);

let schema = microrm::Schema::new()
    .entity::<KVStore>()
    .index::<KVStoreIndex>();

// dump the schema in case you want to inspect it manually
for create_sql in schema.create() {
    println!("{};", create_sql);
}

let db = microrm::DB::new_in_memory(schema).unwrap();
let qi = db.query_interface();

qi.add(&KVStore {
    key: "a_key".to_string(),
    value: "a_value".to_string()
});

// because KVStoreIndex indexes key, this is a logarithmic lookup
let qr = qi.get().by(KVStore::Key, "a_key").one().expect("No errors encountered");

assert!(qr.is_some());
assert_eq!(qr.as_ref().unwrap().key, "a_key");
assert_eq!(qr.as_ref().unwrap().value, "a_value");
```
The schema output from the loop is (details subject to change based on internals):
```sql
CREATE TABLE IF NOT EXISTS "kv_store" (id integer primary key,"key" text,"value" text);
CREATE UNIQUE INDEX "kv_store_index" ON "kv_store" ("key");
```
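Based on the comment in the example above, the leading `!` in `make_index!` marks the index as unique; presumably omitting it declares an ordinary, non-unique index. A minimal sketch under that assumption (the `KVStoreValueIndex` name and the `KVStore::Value` column token are illustrative, not taken from the crate docs):

```rust
// Assumption: make_index! without a leading ! declares a non-unique index,
// which would presumably add a plain CREATE INDEX (no UNIQUE) statement
// to the generated schema. Check the make_index! documentation to confirm.
make_index!(KVStoreValueIndex, KVStore::Value);

let schema = microrm::Schema::new()
    .entity::<KVStore>()
    .index::<KVStoreIndex>()
    .index::<KVStoreValueIndex>();
```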
If you're using `microrm` in a threaded or async environment, you'll need to use a `DBPool`. You can then write code like this:
```rust
# use microrm::prelude::*;
# use microrm::{Entity, make_index};
# #[derive(Debug, Entity, serde::Serialize, serde::Deserialize)]
# pub struct KVStore {
#     pub key: String,
#     pub value: String
# }
async fn insert_a(dbp: &microrm::DBPool<'_>) {
    let qi = dbp.query_interface();
    qi.add(&KVStore {
        key: "a_key".to_string(),
        value: "a_value".to_string()
    });
}

async fn insert_b(dbp: &microrm::DBPool<'_>) {
    let qi = dbp.query_interface();
    qi.add(&KVStore {
        key: "b_key".to_string(),
        value: "b_value".to_string()
    });
}

// running in your favourite async runtime
async fn main() {
    # let schema = microrm::Schema::new().entity::<KVStore>();
    let db = microrm::DB::new_in_memory(schema).unwrap();
    let dbp = microrm::DBPool::new(&db);

    let a = insert_a(&dbp);
    let b = insert_b(&dbp);
    b.await;
    a.await;

    let qi = dbp.query_interface();

    let qr = qi.get().by(KVStore::Key, "a_key").one().unwrap();
    assert!(qr.is_some());
    assert_eq!(qr.as_ref().unwrap().key, "a_key");
    assert_eq!(qr.as_ref().unwrap().value, "a_value");

    let qr = qi.get().by(KVStore::Key, "b_key").one().unwrap();
    assert!(qr.is_some());
    assert_eq!(qr.as_ref().unwrap().key, "b_key");
    assert_eq!(qr.as_ref().unwrap().value, "b_value");
}
# async_std::task::block_on(async { main().await });
```
Note that between acquiring a `QueryInterface` reference and dropping it, you must not `.await` anything; the compiler will (appropriately) complain.
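For example, a sketch of the pattern this implies, reusing only the calls shown above (the `insert_then_wait` helper and its sleep-based `.await` point are illustrative, not part of the crate's API):

```rust
async fn insert_then_wait(dbp: &microrm::DBPool<'_>, pause: std::time::Duration) {
    // Confine the QueryInterface to a plain block so it is dropped before
    // the .await below; holding it across an .await would not compile,
    // as noted above.
    {
        let qi = dbp.query_interface();
        qi.add(&KVStore {
            key: "c_key".to_string(),
            value: "c_value".to_string()
        });
    } // qi dropped here

    // Safe to suspend now that the QueryInterface has been released.
    // (async_std::task::sleep is used purely as a stand-in .await point.)
    async_std::task::sleep(pause).await;
}
```

The same idea applies anywhere else: end the `QueryInterface` borrow, with an explicit block or `drop(qi)`, before the task suspends.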