# `core.py` Cheat Sheet!
This document serves as a quick cheat sheet for the various classes that make up the `beras` module!
## Classes in `core.py`:
### `Tensor` (line 7)
Essentially, a NumPy array that can also be marked as trainable.
An instance of the `Tensor` class behaves like a NumPy array, but you can also check its `.trainable` boolean attribute to see whether the tensor is trainable.
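As a quick sketch (this assumes `Tensor` can wrap an existing array directly; check its constructor in `core.py` for the exact signature):
```python
import numpy as np
from beras.core import Tensor

# Wrap a NumPy array in a Tensor; it still behaves like an ndarray.
t = Tensor(np.zeros((2, 3)))
print(t.shape)      # (2, 3) -- ordinary NumPy behavior
print(t.trainable)  # the extra boolean flag that Tensor adds
```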
### `Variable` (line 25)
An alias for the `Tensor` class.
### `Callable` (line 29)
Say you have an instance `layer1` of a class `Layer`, which extends `Callable`. Calling `layer1(argument)` is the same as calling `layer1.call(argument)`.
#### Methods:
* `__call__(self, *args, **kwargs) -> Tensor:`
Calls the `call()` method with `*args` and `**kwargs` as arguments, then casts the output to the `Tensor` class and returns it.
* `call(self, *args, **kwargs)`
Abstract method, meant to be overridden by subclasses.
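For example, a minimal `Callable` subclass might look like this (`Doubler` is made up purely for illustration):
```python
import numpy as np
from beras.core import Callable, Tensor

class Doubler(Callable):  # hypothetical example
    def call(self, x):
        return x * 2  # the logic that __call__ delegates to

layer1 = Doubler()
out = layer1(np.ones(3))        # equivalent to layer1.call(np.ones(3)),
print(isinstance(out, Tensor))  # ...except the output is cast to Tensor
```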
### `Weighted` (line 48)
Abstract class meant to represent any module that has inherent weights that can be trained.
For instance, a Linear/Dense layer has a weight and a bias, and thus should subclass `Weighted`.
#### Methods:
* `weights(self) -> list[Tensor]`
Abstract method. Intended for the subclass to return the instance's weights.
e.g. a Linear layer's `weights` method should return that layer's weight and bias.
* `trainable_variables(self) -> list[Tensor]:`
Returns a list of all weights that are trainable.
* `non_trainable_variables(self) -> list[Tensor]:`
Returns a list of all weights that are _not_ trainable.
* `trainable(self) -> bool:`
Returns `True` if the instance has any trainable `Tensor`s. Returns `False` otherwise.
* `trainable(self, value: bool):`
Sets the trainable status of all trainable weights to `value`.
Mainly used to mark `Weighted` modules that should not be trained (by setting `trainable` to `False`); see the sketch after this list.
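Here is a rough sketch of a hypothetical Dense-like `Weighted` subclass. It assumes `weights` is exposed as a property, that `trainable` is a property/setter pair (as the paired signatures above suggest), and that new `Tensor`s default to trainable; verify these details against `core.py`:
```python
import numpy as np
from beras.core import Weighted, Tensor

class TinyDense(Weighted):  # hypothetical example layer
    def __init__(self, in_dim, out_dim):
        self.w = Tensor(np.random.randn(in_dim, out_dim))  # weight
        self.b = Tensor(np.zeros(out_dim))                 # bias

    @property
    def weights(self) -> list[Tensor]:
        return [self.w, self.b]

layer = TinyDense(4, 2)
print(len(layer.trainable_variables))      # 2, if Tensors default to trainable
layer.trainable = False                    # freeze the layer
print(len(layer.non_trainable_variables))  # now 2
```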
### `Diffable` (line 79)
`Diffable` subclasses both `Callable` and `Weighted`. This means that `Diffable` things should have a `call` function and weights of some kind.
For any class (say `DiffableThing`) that extends `Diffable`, invoking the `__call__` method builds up a lot of useful instance variables based on `DiffableThing`'s `call` function.
In other words, if we have `thingInstance = DiffableThing()`, then calling `thingInstance(arguments)` will populate `thingInstance`'s instance variables (listed below) based on the `thingInstance.call` method definition.
#### Class Variables
* `gradient_tape`
You don't need to worry about interfacing with this variable yourself. It effectively tracks whether there is currently a `GradientTape` recording `Diffable` operations.
You'll get very used to seeing something like:
```python
with GradientTape() as tape:
    # do some diffable things
    # do some more diffable things
# do some non-diffable things
```
This basically creates a `GradientTape` that keeps track of `Diffable` operations while execution is within the scope of that tape (e.g. lines 2-3 of the snippet). You exit the scope (like exiting an `if` statement) by leaving its indentation level (e.g. `tape` will not record any operations starting from line 4).
If the `Diffable` class has a `gradient_tape` that is not `None`, then all instances of classes which extend `Diffable` will record their operations to that `gradient_tape`.
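To make the scoping concrete, here is a hypothetical illustration of that behavior (the import path for `GradientTape` is a guess; adjust it to wherever the class lives in your codebase):
```python
from beras.core import Diffable
from beras.gradient_tape import GradientTape  # assumed import path

print(Diffable.gradient_tape)  # None -- nothing is being recorded
with GradientTape() as tape:
    # inside the scope, Diffable operations are recorded to `tape`
    print(Diffable.gradient_tape is not None)  # True
print(Diffable.gradient_tape)  # None again after exiting the scope
```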
#### Instance Variables
* `argnames`
A list of the names of all arguments to the `call` function that are not keyword arguments.
You probably won't need to use this.
* `input_dict`
A dictionary that associates each argument of the `call` function with its value, stored as {`arg_name`: `arg_value`}. (See the sketch after this list for what these variables look like in practice.)
* `inputs`
A list which contains the values passed in as arguments to the `call` function.
* `outputs`
A list which contains all values returned by the `call` function.
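Here is a toy illustration of what these variables hold after a call. `AddOne` is hypothetical, and it stubs out the abstract methods of `Diffable` (plus the inherited `weights` accessor) just so it can be instantiated:
```python
import numpy as np
from beras.core import Diffable

class AddOne(Diffable):  # hypothetical op: adds 1 elementwise
    def call(self, x):
        return x + 1

    @property
    def weights(self):
        return []  # this toy op has no weights

    def get_input_gradients(self):
        return [np.ones_like(self.inputs[0])]  # d(x+1)/dx = 1

    def get_weight_gradients(self):
        return []

op = AddOne()
out = op(np.array([1.0, 2.0]))
print(op.input_dict)  # {'x': <the array we passed in>}
print(op.inputs)      # [<the array we passed in>]
print(op.outputs)     # [<the array that call() returned>]
```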
#### Methods
* `get_input_gradients(self) -> list[np.ndarray]:`
Abstract method. Returns list of gradients with respect to inputs.
* `get_weight_gradients(self) -> list[np.ndarray]:`
Abstract method. Returns list of gradients with respect to weights.
* `compose_input_gradients(self, J=None):`
Composes (multiplies) the inputted cumulative Jacobian matrix `J` (a matrix of partial derivatives of downstream outputs with respect to this layer's outputs) with the layer's input Jacobian, yielding gradients with respect to the layer's inputs.
* `compose_weight_gradients(self, J=None) -> list:`
Composes (multiplies) the inputted cumulative Jacobian matrix with the layer's weight Jacobian, yielding gradients with respect to the layer's weights. (See the chain-rule sketch below.)
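These two methods implement the chain rule. As a plain-NumPy sketch of the idea (the shapes here are made up, and `core.py`'s batching conventions may differ):
```python
import numpy as np

J_up = np.random.randn(1, 3)     # cumulative Jacobian: d(final output)/d(layer output)
J_layer = np.random.randn(3, 4)  # layer's input Jacobian: d(layer output)/d(layer input)

# Chain rule: composing the two gives d(final output)/d(layer input).
J_composed = J_up @ J_layer
print(J_composed.shape)  # (1, 4)
```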