# Report on Hydrosphere
## What could be improved in Hydrosphere:
* Show which files are loaded in this version of the model, the code of func_main.py, what was installed, and the time spent assembling and installing the model
* Model assembly is too slow (you have to wait ~15 minutes)
* The TensorProto dtype appears to have no effect

## Documentation issues:
* Broken links in the hs CLI and resources/models sections:
`([you should]({{site.baseurl}}{%link how-to/write-manifests.md%}).) `
* No usage examples for these functions, so it is still unclear what they are and how to use them. Add examples.

* quickstart: 9090 port bug in hs
* Redundant folder (`linear_regression/model.h5 -> model.h5`), which confuses the user about the folder structure.

* The gRPC request code shown in the dev and quickstart docs is incorrect:
What is shown:
```
import grpc
import hydro_serving_grpc as hs
channel = grpc.insecure_channel("localhost")
stub = hs.PredictionServiceStub(channel)
model_spec = hs.ModelSpec(name="face_detector_ssd", signature_name="infer")
tensor_shape = hs.TensorShapeProto(dim=[hs.TensorShapeProto.Dim(size=dim_size)])
tensor = hs.TensorProto(dtype=hs.DT_INT8, tensor_shape=tensor_shape, int_val=%input value% )
request = hs.PredictRequest(model_spec=model_spec, inputs={" %input key% ": tensor})
result = stub.Predict(request)
```
What works:
```
import grpc
import numpy as np
import hydro_serving_grpc as hs
from grpc import ssl_channel_credentials
# connect to your ML Lambda instance
channel = grpc.secure_channel("dev.k8s.hydrosphere.io", credentials=ssl_channel_credentials())
stub = hs.PredictionServiceStub(channel)
# 1. define the model that you'll use
model_spec = hs.ModelSpec(name="NGt2", signature_name="infer")
# 2. define tensor_shape for Tensor instance
tensor_shape = hs.TensorShapeProto(dim=[hs.TensorShapeProto.Dim(size=1), hs.TensorShapeProto.Dim(size=300),hs.TensorShapeProto.Dim(size=3),hs.TensorShapeProto.Dim(size=3)])
# 3. define a tensor with the needed data
val = np.load('test.npy').astype(np.float64)  # float64 to match DT_DOUBLE below
val = val.flatten()
tensor = hs.TensorProto(dtype=hs.DT_DOUBLE, tensor_shape=tensor_shape, double_val=val)
# 4. create PredictRequest instance
request = hs.PredictRequest(model_spec=model_spec, inputs={"x": tensor})
# call Predict method
result = stub.Predict(request)
```
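The repetitive request boilerplate above could be factored into a small helper. The sketch below is illustrative, not part of the hydro_serving_grpc API; the dtype names and value-field names are assumptions based on TensorFlow's TensorProto conventions, which hydro_serving_grpc mirrors, so verify them against your installed version:

```python
import numpy as np

# Sketch of a helper for packing a numpy array into TensorProto fields.
# The dtype/field mapping is an assumption (TensorFlow TensorProto
# conventions); extend it with the types your models actually use.
NP_TO_PROTO = {
    np.dtype("float32"): ("DT_FLOAT", "float_val"),
    np.dtype("float64"): ("DT_DOUBLE", "double_val"),
    np.dtype("int32"):   ("DT_INT32", "int_val"),
    np.dtype("int64"):   ("DT_INT64", "int64_val"),
}

def tensor_fields(arr):
    """Return (dtype name, value field, shape, flat values) for `arr`."""
    dtype_name, field = NP_TO_PROTO[arr.dtype]
    return dtype_name, field, list(arr.shape), arr.ravel().tolist()
```

With such a helper, steps 2–3 of the working example reduce to building a `TensorShapeProto` from the returned shape and passing the flat values under the returned field name.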
* No examples of how to use metrics (e.g. the Autoencoder hypothesis and how to add it).
* No documentation on how to create a servable (from the Python hs API).
* No documentation on all contract types and their mapping to TensorProto types: when to use a scalar, how to send strings.
* No documentation on what model replay is and how to do it.
* No documentation on how to get the value of the result. The documentation ends at receiving a result from an application; there is nothing on how to cast TensorProto data to numpy while preserving the TensorProto dtype.

## Proposal:
1. Add subsections to the documentation:
- contracts: list of types, profiles, possible shapes
- tensorproto: list of types, mapping to contract types, numerical values of the types, and examples of how to get values.
2. hs CLI examples (hs apply, hs profile push)
3. How to add metrics and load data.
4. Document metadata usage.
5. Add info about model replay and monitoring.
6. Add more info on Gateway and Manager (more than one sentence, maybe?).
7. Embed Python notebooks with examples from hydra-serving-example, and fix the repo.
8. Add Spark models.