# Wiring

Wiring is the act of "connecting" together dependencies.
In `di`, wiring is handled by the `Dependant` API.
The general idea is that the `Container` accepts a `Dependant` and then asks it for its sub-dependencies.
These sub-dependencies are themselves `Dependant`s, and so the `Container` keeps asking them for their sub-dependencies until there are none left.
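The recursive walk described above can be sketched with the standard library. This is a toy illustration of the idea, not `di`'s actual implementation; `flatten` is a hypothetical helper:

```python
import inspect


# Toy sketch of the recursive walk (NOT di's implementation):
# introspect a callable's parameters, treat each class-typed
# parameter as a sub-dependency, and recurse until none remain.
def flatten(call, found=None):
    found = found if found is not None else []
    found.append(call)
    for param in inspect.signature(call).parameters.values():
        ann = param.annotation
        if ann is not inspect.Parameter.empty and inspect.isclass(ann):
            flatten(ann, found)
    return found


class Config:
    pass


class DBConn:
    def __init__(self, config: Config) -> None:
        self.config = config


def endpoint(conn: DBConn) -> None:
    ...


# endpoint needs DBConn, which in turn needs Config
assert flatten(endpoint) == [endpoint, DBConn, Config]
```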
But how does a `Dependant` know what its dependencies are?
Every `Dependant` has a `call` attribute, a callable (a class, a function, etc.) that can be introspected (usually with `inspect.signature`) to find its parameters.
From these parameters, the `Dependant` determines what its dependencies are.
But how do we go from a parameter `param: Foo` to a `Dependant`?
There are several different mechanisms available:
## Autowiring

Autowiring is available when the parameter's type annotation is a well-behaved type/class. Well-behaved here means that its parameters can be understood by `di`: for example, they are type annotated and uniquely identifiable (`param: int` won't work properly).
Here is an example showing autowiring in action.
Autowiring can work with dataclasses, even ones with a `default_factory`.
In this example we'll load a config from the environment:
```python
from dataclasses import dataclass

from di import AsyncExecutor, Container, Dependant


@dataclass
class Config:
    host: str = "localhost"


class DBConn:
    def __init__(self, config: Config) -> None:
        self.host = config.host


async def endpoint(conn: DBConn) -> None:
    assert isinstance(conn, DBConn)


async def framework():
    container = Container()
    solved = container.solve(Dependant(endpoint, scope="request"), scopes=["request"])
    async with container.enter_scope("request") as state:
        await container.execute_async(solved, executor=AsyncExecutor(), state=state)
```
What makes this "autowiring" is that we didn't have to tell `di` how to construct `DBConn`: `di` detected that `endpoint` needed a `DBConn` and that `DBConn` in turn needs a `Config` instance.
This is the simplest option because you don't have to do anything, but it's relatively limited in terms of what can be injected.
### Autowiring metadata

To execute a dependency, `di` needs both a callable target (a class, function, etc.) and some metadata, namely `scope` and `use_cache`.
Autowiring can discover the callable target from type annotations, but it cannot infer the metadata.
So the metadata is simply inherited from the parent dependency: in the example above, we declared `endpoint` as having a `"request"` scope, so all of the sub-dependencies that get autowired end up with the `"request"` scope as well.
## Dependency markers

Dependency markers, in the form of `di.dependant.Marker`, hold information about a dependency, for example how to construct it or its scope.
Markers are generally useful when:

- Injecting a non-identifiable type, like a `list[str]`
- Injecting the result of a function (`param: some_function` is not valid in Python)
- The type being injected is not well-behaved and you need to tell `di` how to construct it
- You want to attach metadata to the target (like explicitly setting the `scope`)
Let's take our previous example and look at how we would have used markers if `DBConn` accepted a `host: str` parameter instead of our `Config` class directly:
```python
from dataclasses import dataclass

from di import Container, Dependant, Marker, SyncExecutor
from di.typing import Annotated


@dataclass
class Config:
    host: str = "localhost"


class DBConn:
    def __init__(self, host: str) -> None:
        self.host = host


def inject_db(config: Config) -> DBConn:
    return DBConn(host=config.host)


def endpoint(conn: Annotated[DBConn, Marker(inject_db, scope="request")]) -> None:
    assert isinstance(conn, DBConn)


def framework():
    container = Container()
    solved = container.solve(Dependant(endpoint, scope="request"), scopes=["request"])
    with container.enter_scope("request") as state:
        container.execute_sync(solved, executor=SyncExecutor(), state=state)
```
All we had to do was tell `di` how to construct `DBConn` (by assigning the parameter a `Marker`) and `di` can do the rest.
Note that we are still using autowiring for `endpoint` and `Config`; it's not all or nothing, and you can mix and match styles.
### A note on Annotated / PEP 593

Markers are set via PEP 593's `Annotated`.
This is in contrast to FastAPI's use of markers as default values (`param: int = Depends(...)`).
When FastAPI was designed, PEP 593 did not exist, and there are several advantages to using PEP 593's `Annotated`:

- Compatible with other uses of default values, like dataclasses' `field` or Pydantic's `Field`.
- Non-invasive modification of signatures: adding `Marker(...)` in `Annotated` should be ignored by anything except `di`.
- Functions/classes can be called as normal outside of `di` and the default values (when present) will be used.
- Multiple markers can be used. For example, something like `Annotated[T, PydanticField(), Marker()]`.
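The second and third points can be illustrated with only the standard library (the `Marker` class below is a stand-in, since `Annotated` metadata can be any object): the function remains callable as normal, while a framework can still recover the marker by introspection.

```python
from typing import Annotated, get_type_hints


class Marker:
    """Stand-in for di's Marker; Annotated metadata can be any object."""


def endpoint(host: Annotated[str, Marker()] = "localhost") -> str:
    return host


# Called as a normal function, outside any container, defaults apply:
assert endpoint() == "localhost"
assert endpoint("db.internal") == "db.internal"

# A framework like di can still discover the marker via introspection:
hints = get_type_hints(endpoint, include_extras=True)
assert isinstance(hints["host"].__metadata__[0], Marker)
```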
This last point is important because of the composability it provides:

```python
from typing import TypeVar, Annotated

from di import Marker
from pydantic import Field

T_int = TypeVar("T_int", bound=int)
PositiveInt = Annotated[T_int, Field(ge=0)]

T = TypeVar("T")
Depends = Annotated[T, Marker()]


def foo(v: Depends[PositiveInt[int]]) -> int:
    return v
```
Note how we used type aliases to create stackable, reusable types.
This means that while `Annotated` can sometimes be verbose, it can also be made very convenient with type aliases.
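This stacking works because `Annotated` flattens when nested, as specified by PEP 593. A stdlib-only sketch, where the `"a"` and `"b"` strings stand in for markers like `Field` or `Marker`:

```python
from typing import Annotated, TypeVar, get_args

T = TypeVar("T")
WithA = Annotated[T, "a"]
WithB = Annotated[T, "b"]

# Nesting the aliases flattens into a single Annotated carrying
# both metadata items, in order:
Combined = WithB[WithA[int]]
assert get_args(Combined) == (int, "a", "b")
```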
## Custom types

If you are writing and injecting your own classes, you also have the option of putting the dependency injection metadata into the class itself, via the `__di_dependency__(cls) -> Marker` protocol. This obviously doesn't work if you are injecting a 3rd party class you are importing (unless you subclass it).
The main advantage of this method is that the consumers of this class (which may be your own codebase) don't have to apply markers everywhere or worry about inconsistent scopes (see scopes).
For example, we can tell `di` to construct a class asynchronously:
```python
import inspect
from dataclasses import dataclass

from di import AsyncExecutor, Container, Dependant


class HTTPClient:
    pass


@dataclass
class B:
    msg: str

    @classmethod
    def __di_dependency__(cls, param: inspect.Parameter) -> "Dependant[B]":
        # note that client is injected by di!
        async def func(client: HTTPClient) -> B:
            # do an http request or something
            return B(msg=f"👋 from {param.name}")

        return Dependant(func)


async def main() -> None:
    def endpoint(b: B) -> str:
        return b.msg

    container = Container()
    executor = AsyncExecutor()
    solved = container.solve(Dependant(endpoint), scopes=(None,))
    async with container.enter_scope(None) as state:
        res = await container.execute_async(solved, executor=executor, state=state)
        assert res == "👋 from b"
```
This allows you to construct your class even if doing so requires async work and a reference to the class itself.
If you only need to do async work and don't need access to the class, you can instead just make your field depend on an asynchronous function:
```python
from dataclasses import dataclass

from di import AsyncExecutor, Container, Dependant, Marker
from di.typing import Annotated


async def get_msg() -> str:
    # make an http request or something
    return "👋"


@dataclass
class B:
    msg: Annotated[str, Marker(get_msg)]


async def main() -> None:
    def endpoint(b: B) -> str:
        return b.msg

    container = Container()
    executor = AsyncExecutor()
    solved = container.solve(Dependant(endpoint), scopes=(None,))
    async with container.enter_scope(None) as state:
        res = await container.execute_async(solved, executor=executor, state=state)
        assert res == "👋"
```
Another way this is useful is to pre-declare scopes for a class.
For example, you may only want to have one `UsersRepo` for your entire app:
```python
import inspect

from di import Container, Dependant, SyncExecutor


class UsersRepo:
    @classmethod
    def __di_dependency__(cls, param: inspect.Parameter) -> "Dependant[UsersRepo]":
        return Dependant(UsersRepo, scope="app")


def endpoint(repo: UsersRepo) -> UsersRepo:
    return repo


def framework():
    container = Container()
    solved = container.solve(
        Dependant(endpoint, scope="request"), scopes=["app", "request"]
    )
    executor = SyncExecutor()
    with container.enter_scope("app") as app_state:
        with container.enter_scope("request", state=app_state) as req_state:
            repo1 = container.execute_sync(solved, executor=executor, state=req_state)
        with container.enter_scope("request", state=app_state) as req_state:
            repo2 = container.execute_sync(solved, executor=executor, state=req_state)
        assert repo1 is repo2
```
### InjectableClass

As a convenience, `di` provides an `Injectable` type that you can inherit from so that you can easily pass parameters to `Marker` without implementing `__di_dependency__`:
```python
from di import Container, Dependant, SyncExecutor
from di.dependant import Injectable


class UsersRepo(Injectable, scope="app"):
    pass


def endpoint(repo: UsersRepo) -> UsersRepo:
    return repo


def framework():
    container = Container()
    solved = container.solve(
        Dependant(endpoint, scope="request"), scopes=["app", "request"]
    )
    executor = SyncExecutor()
    with container.enter_scope("app") as app_state:
        with container.enter_scope("request", state=app_state) as request_state:
            repo1 = container.execute_sync(
                solved, executor=executor, state=request_state
            )
        with container.enter_scope("request", state=app_state) as request_state:
            repo2 = container.execute_sync(
                solved, executor=executor, state=request_state
            )
        assert repo1 is repo2
```
## Binds

Binds, which are covered in depth in the binds section, offer a way of swapping out dependencies imperatively ("when you encounter type `X`, use function `y` to build it"). They can be used with any of the methods described above.
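Conceptually, a bind is just a mapping from a type to a replacement provider. The following is a toy illustration of that idea only, not `di`'s actual bind API (which the binds section covers); `binds` and `construct` are hypothetical:

```python
# Toy illustration of the bind concept (not di's API).
class DBConn:
    pass


class TestDBConn(DBConn):
    pass


# "When you encounter type DBConn, use this function to build it":
binds = {DBConn: lambda: TestDBConn()}


def construct(dep: type) -> object:
    # Fall back to calling the type itself when no bind is registered
    provider = binds.get(dep, dep)
    return provider()


assert isinstance(construct(DBConn), TestDBConn)
```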
## Performance

Reflection (inspecting function signatures for dependencies) is very slow.
For this reason, `di` tries to avoid it as much as possible.
The best way to avoid extra introspection is to re-use Solved Dependants.
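The principle can be sketched with the standard library: pay the introspection cost once and cache the result. This illustrates the idea only, not `di`'s actual caching; `cached_signature` is a hypothetical helper:

```python
import inspect

# Hypothetical sketch: cache inspect.signature results so each
# callable is introspected only once, then re-used for every run.
_signatures: dict = {}


def cached_signature(fn):
    if fn not in _signatures:
        _signatures[fn] = inspect.signature(fn)  # the slow, one-time step
    return _signatures[fn]


def endpoint(a: int, b: str) -> None:
    ...


sig1 = cached_signature(endpoint)
sig2 = cached_signature(endpoint)
assert sig1 is sig2  # the second lookup never re-inspects
assert list(sig1.parameters) == ["a", "b"]
```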
## Conclusion

There are several ways to declare dependencies in `di`.
Which one makes sense for each use case depends on several factors, but ultimately they all achieve the same outcome.