# Science System
## Premise
- Keeping this short and concise
- Deconstruction R&D is a 5-minute process with zero variance beyond following some other player's meta-list. There's no "science" in science; the entire department is just a complicated powergaming gear printer
- Techwebs was made as a timegate. Recent developments have made it slightly less awful and more engaging, but it's still a timegate. A bomb is just a harder deconstruction recipe, and deconstructing slime cores and the like for points is still a minmax speedrun. There's still no science in this.
- Solution:
- Create a hybrid system
- "Busywork" fluffed as restoring data - only taking 5-15 minutes, tops. The less tedious/unnecessary clicking the better
- High-powered gear and unique prototypes locked behind tech tree/discovery system.
## Difficulty & Persistence
- The system will be unified across both servers. One is an action server with rounds lasting 2-3 hours; the other is perma-extended with 5-hour rounds at the moment and very slow development on situations
- Solution: Configuration toggles for everything
- Every component of science's systems will have configuration values for difficulty and persistence
- Codewise, "default" difficulty will be optimized for main
- ~45 minutes to fully max each area by a single staffer, 60 for endgame, 15-30 for "relatively/somewhat done"
- Things like nanites will let you piece together building blocks telling you what each does so you just need to think of a good combination
- For the RP server, configuration will default to extremely hard + persistent
- All data holders are serializable
- The station's primary research server houses persistent data - this can be sabotaged or lost through damage
- Things like nanites will give zero hints or training wheels - it's expected to take days or weeks to fully figure out functionality
- Techwebs itself will contain extremely difficult data requirements that are not always acquirable in any given round, but partial progress and data itself is stored.
- The system is calibrated on both ends to always give the bare minimum functionality within 10-15 minutes tops to allow science to decide what to pursue in any given round.
## Components
The following are main components of new-science and will have their primary data stored in sub-datums as part of a research server's data holder.
- Fabricators & Calibration - Protolathe, exosuit fabricators, autolathes, etc
- Techwebs - Main science research
- [Nanites](/kdPuc9NnRaOq1VUhnutvkQ) - Anything nanite makers use will be stored in research-system based correlation tables to help them as well.
- [Xenoarch](/oEwDvoQCTciSQ0D6wa4XNg)
- [Catalogue System](/6Eex7UhGQQ6IdHT01Lpy2g)
- Genetics
The following are parts of science that aren't part of the main data-gathering system, but they benefit from the data gathered and are useful for acquiring equipment for data collection.
- Telescience
- Xenobiology
- Rigsuits/Mecha (To be split between engineering/robotics)
- Fabrication - **All** lathes count as this, protolathe, exosuit fabricators, autolathes, etc
- [Material Synthesis/System](/czASLc_KTzmwgdmp9QXDnA)
## Data
- /datum/research_data holds master information on all science things
- Stored in research servers.
- Aim: Create a feeling of "qualitative" data with names like high-energy explosives, molecular binding, and bluespace stabilization, rather than generic "points" spent like a currency.
- For nodes, servers do "correlation runs" to unlock or partially unlock them based on stored data
- Stored data:
- /datum/research_data/?
- Some things like sensors will be more number-based.
- Some things like xenobiology/xenoarch would be more milestones to unlock by scanning a certain thing, and whatnot
- Things like catalogue could associate specific categories with a list of progressions for when a node calls for x knowledge in y area, and also store entries by ID/typepath as partially or fully "catalogued", for strict requirements.
- Minimize reliance on requiring a specific amount of data in whatever area (e.g. requiring certain bomb radii)
- If possible, have node unlocks have either/or relations configured in terms of data needed.
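The either/or requirement idea above can be sketched roughly like this (Python used purely for illustration, since the real thing would be DM; every category name and number here is made up):

```python
# Hypothetical sketch: node requirements as either/or groups of qualitative
# data (an OR of ANDs), instead of a single generic "points" total.

def node_satisfied(requirements, stored_data):
    """requirements: list of alternative requirement sets.
    Each set maps a data category to a minimum amount.
    stored_data: mapping of category -> amount held on the server."""
    return any(
        all(stored_data.get(category, 0) >= needed
            for category, needed in req_set.items())
        for req_set in requirements
    )

# A node that unlocks with EITHER bomb telemetry OR xenoarch milestones:
reqs = [
    {"high_energy_physics": 3},
    {"bluespace_stabilization": 1, "xenoarch_milestones": 2},
]
print(node_satisfied(reqs, {"high_energy_physics": 3}))      # True
print(node_satisfied(reqs, {"bluespace_stabilization": 1}))  # False
```

The point of the OR-of-ANDs shape is that a round where bombs aren't feasible can still unlock the node through a different data area.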
### Datums
- /datum/research_store/xenoarcheology - stores information on xenoarch effects and unlocked things
- /datum/research_store/catalogue - stores information on catalogued items, artifacts, data, etc
- /datum/research_store/nanites - stores nanite glyph correlation table for when that system is designed
- /datum/research_store/fabrication - stores fabricator calibration statuses
- /datum/research_store/designs - stores fabrication designs
- /datum/research_store/technology - master techtree
- /datum/research_store/sensor - sensor storage - can be used for potential overmap data gathering, tachyon-doppler bomb results, etc.
### Storage
- All research_store datums are clustered in research_data as a holder
- It's possible to have disks that only store some subsets of data, or subsets of a subset of data
- Things like disks and external fileservers can store these things too
- Make research stores file based if possible? It'd be neat to have data theft possible via modular computers
- Research mainframes can store all data at once
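A rough sketch of the "subsets of a subset" storage idea, just to pin down the shape of it (all names invented; the real stores would be the `/datum/research_store` datums above):

```python
# Hypothetical: a disk declares which stores (and optionally which keys
# within a store) it can hold; copying filters the master holder down.

def copy_subset(master, allowed):
    """master: {store_name: {key: value}}
    allowed: {store_name: None (whole store) | set of keys (sub-subset)}"""
    out = {}
    for store, keys in allowed.items():
        if store not in master:
            continue
        if keys is None:
            out[store] = dict(master[store])  # whole-store copy
        else:
            out[store] = {k: v for k, v in master[store].items() if k in keys}
    return out

master = {
    "designs": {"aeg": 1.0, "rig": 0.4},
    "technology": {"weapons_2": True},
}
# A design disk that only holds the AEG design, nothing else:
disk = copy_subset(master, {"designs": {"aeg"}})
print(disk)  # {'designs': {'aeg': 1.0}}
```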
### Decay
- When persistence is enabled, data slowly corrupts itself
- Results in the same things as deliberate sabotage over time.
### Sabotage
- Intentionally delete research data
- Modify research data directly - sabotage effect correlation tables, decrease design stability/reliability, sabotage fabricator calibrations
### Networks
- When this system is first made it'll be a global science datum
- When networks, either inspired from Nebula or otherwise, are made for modular computers and similar, optimally, research would operate off station networks.
- This allows a full separation between the station, other ships, ghostroles, etc
- Stealing data would be a thing, due to limited data transfer
- Make it very slow to transfer research data across a network so you can't instantly drain a server
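The throttled-transfer idea, sketched with placeholder numbers (rate and totals are invented):

```python
# Hypothetical: each network tick moves at most `rate_per_tick` units of
# research data, so draining a server takes many ticks instead of one click.

def transfer_ticks(total_units, rate_per_tick):
    """How many ticks a full transfer takes at a fixed per-tick rate."""
    ticks = 0
    remaining = total_units
    while remaining > 0:
        remaining -= min(rate_per_tick, remaining)
        ticks += 1
    return ticks

# A 500-unit research store at 10 units/tick takes 50 ticks to steal:
print(transfer_ticks(500, 10))  # 50
```

Data theft stays possible, but a thief has to hold the connection open long enough to be noticed.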
### Correlation
- Node unlocks are run through correlation
- The research mainframe just stores data + has a bit of processing power
- Research coprocessing servers expedite this process
- Produces a lot of heat and consumes a lot of energy
- Results in either a full node completion, partial node completion, or full failure
- When more data than necessary is supplied, this process becomes easier and faster
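Roughly what a correlation run's outcome logic could look like (the thresholds here are invented for the sketch):

```python
# Hypothetical outcome logic: surplus data gives a full unlock, a shortfall
# gives partial completion (progress is stored), and too little fails.

def correlation_result(data_supplied, data_required):
    ratio = data_supplied / data_required
    if ratio >= 1.0:
        return "full"     # node fully unlocked; surplus also speeds the run
    if ratio >= 0.5:
        return "partial"  # partial node completion, progress persists
    return "failure"

print(correlation_result(12, 10))  # full
print(correlation_result(6, 10))   # partial
print(correlation_result(2, 10))   # failure
```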
## Techwebs
- /datum/tech_node
- What you're unlocking by doing correlation runs on data
- A starter technode has most of the station's preliminary technology and designs
- Potentially go full Rimworld and have the starter technode be multiple starter technode**s**, which are very easy to research. Non-station occupants etc. would have to complete these. The base node would be even simpler
- **Most of the station's low to mid-game tech are in this.** Why? Keep reading.
### Nodes
- Nodes store:
- Designs
- Initial reliability for designs, if applicable
- Optimization data - some nodes might make certain types of items easier, faster, or more efficient to print
- Some nodes might give small bonuses for all fabricators
- Some might hold arbitrary knowledge or more - all applied to connected machinery on sync.
### R&D console
- R&D console is the primary interface of all of these
### Tree
- Because we're no longer trying to do a dumb timelock, we can have far more of a diverse techtree than before.
- Logical sense still applies, high tier weapons requiring lower tier research as an example, etc, but we no longer need to lock half the game around tier X upgrades.
### Generation
- Procedurally generate the entire techtree.
- Node *contents* are static
- Or are they? Mostly because I can't think of a way to make them dynamic
- Give possibilities for how to generate the tech on a given round and the computer picks what you need to do
- Configuration for how easy/hard nodes are
- Configuration for whether the player is told exactly what is needed to get a node, vs rough ideas of what kinds of data they need. (irrelevant for RP as persistence will be on)
- If persistence is enabled (irrelevant for main as persistence will be off)
- We're going to assume only one server, the station's primary indestructible snowflake mainframe, is persistent.
- Offmap ships can start with random tech/whatnot on their own server's init processes.
- Data corruption results in a node being regenerated entirely
- Continuity is still plausible as no other server is persistent - if others were, then they'd generate with different requirements
- Generation difficulty likely cranked to hilarious levels in this case
- Separate configuration for MTTH node corruption without sabotage
- Node corruption is ticked on round end + successful persistence subsystem shutdown
- Crashes won't affect science.
- ~~let's be honest I don't care about HRP continuity memes, but this is convenient as it solves that too.~~
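The MTTH corruption roll could work something like this (Python sketch; the config value and probability model are hypothetical):

```python
# Hypothetical MTTH (mean time to happen) corruption check, rolled once per
# node at round end + successful persistence shutdown: each cycle a node
# corrupts with probability 1/MTTH, giving an average lifetime of MTTH rounds.

import random

def corruption_roll(mtth_rounds, roll=random.random):
    """Returns True if the node corrupts this persistence cycle."""
    return roll() < 1.0 / mtth_rounds

# With MTTH = 20 rounds, roughly 5% of nodes corrupt each round on average:
rng = random.Random(0)
rolls = [corruption_roll(20, rng.random) for _ in range(10000)]
print(sum(rolls) / len(rolls))
```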
## Fabricators
- Fabricators get refactored.
- All of them.
- Autolathes count, imprinters, biogenerators, etc
### Refactoring
- All fabricators can hold all materials.
- Autolathes are limited to metal/glass still because eh.
- All fabricators get the same TGUI interface
- All fabricators run off /datum/design
- `fabricator_type` on /datum/design designates what fabricators can print them
- Designs have build times, fabricators have proper queues.
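A minimal sketch of the design/queue shape (`fabricator_type` and build times come from the bullets above; everything else here is invented):

```python
# Hypothetical shape for /datum/design + fabricator queues, in Python for
# illustration only.

from collections import deque
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    fabricator_type: str  # which fabricator class may print this design
    build_time: int       # ticks to print one copy

class Fabricator:
    def __init__(self, fab_type):
        self.fab_type = fab_type
        self.queue = deque()

    def enqueue(self, design):
        if design.fabricator_type != self.fab_type:
            raise ValueError("design not printable on this fabricator")
        self.queue.append(design)

    def total_build_time(self):
        return sum(d.build_time for d in self.queue)

lathe = Fabricator("protolathe")
lathe.enqueue(Design("AEG", "protolathe", 40))
lathe.enqueue(Design("Upgrade Disk", "protolathe", 10))
print(lathe.total_build_time())  # 50
```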
### Instability
- Support design instability
- Unstable designs might cause Bad Things to Occur if you print them with a fabricator
- In Nebula/Bay this is the fabricator blowing up. An instant RNG blowup isn't fun for anyone, so it'll probably be more deterministic/damage-based instead.
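One way the deterministic-damage approach could look (all numbers are placeholders):

```python
# Hypothetical: printing an unstable design deals a fixed, visible chunk of
# damage to the fabricator per print, instead of a random instant explosion.

def print_unstable(fab_health, instability, damage_scale=10):
    """Returns remaining fabricator health after printing one item.
    instability in [0, 1]; 0 is a fully stable design."""
    return max(0, fab_health - int(instability * damage_scale))

health = 100
for _ in range(3):  # three prints of a 0.5-instability design
    health = print_unstable(health, 0.5)
print(health)  # 85
```

The player can see the machine degrading and decide whether the print run is worth it, which is the whole point over RNG.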
### Research Network Connection
- Fabricators can sync data from a R&D console OR from disks to download designs, optimizations, etc.
- Fabricators however no longer require them for printing. Ever. Garbage interfaces are garbage interfaces and they won't be coming back.
### Calibration
- Fabricator calibration data is required for some designs to print well.
- For mid level designs, "decently" affordable printing + decently fast printing requires leveled calibration + tiered upgrades like usual
- In exchange, you can print them basically from roundstart. It's just you are **heavily** incentivized to do some good old deconstruction R&D first.
- Deconstruction R&D is returning, sue me.
- There'll probably be an alternative if you don't want to do it at all, but deconstruction will be the fastest and laziest way to get roundstart calibration to an acceptable degree.
### Reliability
- **Items have reliability again. YAY!**
- Items can do bad things (irradiation for AEGs as an example) when unreliable
- /datum/element/reliability
- Reliability goes down (or at least a "fuckup counter" goes up) every time something bad happens.
- Deconstruct the item to make further prints reliable
- Experimental designs might have random reliability
- **Reliability should be a gradual issue, not a RNG explode-on-use, that's not fun for anyone.**
- High fabricator calibration and high techtree levels giving lots of optimization will automatically offset reliability without needing to do this
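A rough shape for the reliability element's "fuckup counter" with calibration/techtree offsets (all tuning values invented):

```python
# Hypothetical sketch of /datum/element/reliability: failures increment a
# counter, calibration/optimization absorb part of the penalty, and the
# effective reliability degrades gradually rather than exploding at random.

class Reliability:
    def __init__(self, base_reliability=1.0, offset=0.0):
        self.fuckups = 0
        self.base = base_reliability
        self.offset = offset  # from fabricator calibration / techtree levels

    def record_fuckup(self):
        self.fuckups += 1

    def effective(self):
        # Each fuckup costs 5% reliability; the offset absorbs some of it.
        penalty = max(0.0, self.fuckups * 0.05 - self.offset)
        return max(0.0, self.base - penalty)

r = Reliability(offset=0.05)  # calibration absorbs one fuckup's worth
r.record_fuckup()
r.record_fuckup()
print(round(r.effective(), 2))  # 0.95
```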
## Backend - caching
- Most of these changes make science variable, which means most things aren't cacheable
- technodes can potentially be cached (yet are procedurally generated, hmm)
- designs - having reliability + other stuff makes it not very cacheable
- research data obviously can't be cached
- offset by R&D consoles **no longer storing research data** - they link directly to research servers
- fabricators just need to store a datum telling them their optimization levels, efficiency/calibrations, etc, and printable designs
- auto-generated from research data using a greater-or-equal reducer on sync
- saves memory this way, just needs to keep references to designs, and maybe reliability overrides/calibration overrides
- makes VV a bit harder if this is the case
- optimally everything utilizing caching would use something like how block/parry data works on main - typepath, id, or instance.
- procedural generation easily works with this
- admin vv can work with this as it'll return the instance directly.
- designs and nodes both need IDs to uniquely identify them to prevent duplicates
- catalogue data etc are all cached, and just have their catalogue state/percentage stored because why would you do it any other way
- all other data is stored directly if numerical, or cached if datumized.
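The typepath/id/instance lookup mentioned above, sketched in Python (the real thing would be DM like the block/parry data it's modeled on; names are invented):

```python
# Hypothetical resolver: accepts a string ID, a class ("typepath"), or an
# instance, and returns the canonical cached instance. Unique IDs prevent
# duplicates; passing an instance returns it directly, which keeps admin VV
# working on procedurally generated nodes.

class Node:
    def __init__(self, node_id):
        self.id = node_id

_cache = {}

def resolve(ref):
    """ref may be a string id, a Node subclass (typepath), or an instance."""
    if isinstance(ref, Node):
        return ref              # instance: return directly (VV-safe)
    if isinstance(ref, type) and issubclass(ref, Node):
        ref = ref.__name__      # typepath: key the cache by class name
    if ref not in _cache:
        _cache[ref] = Node(ref)  # lazily build and cache by id
    return _cache[ref]

a = resolve("weapons_2")
b = resolve("weapons_2")
print(a is b)  # True - duplicates are prevented by unique IDs
```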