Like a Knowledge Graph. Unlike any Knowledge Graph.
E2E application of the Brain Reasoner.
Example
English Question --> Brain Natural Language Engine (BNLE) --> KnowledgeQuestion(state = ReasoningState)
What is the population of India?
What is BrainToken("/common/attribute/country/population") of BrainToken("common/entity/country/1")
common/entity/country/1 represents the India node in the KG
[What BrainToken("/common/attribute/country/population")] BrainToken("common/entity/country/1")
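For illustration, a minimal sketch of how the BNLE output could be held in memory. The ReasoningState fields (given, ask, known) and the BrainToken wrapper below are assumptions for this sketch, not the actual Brain API:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)
class BrainToken:
    path: str   # e.g. "/common/attribute/country/population" or "common/entity/country/1"

@dataclass
class ReasoningState:
    given: List[BrainToken] = field(default_factory=list)   # grounded tokens from the question
    ask: List[BrainToken] = field(default_factory=list)     # tokens the question asks for
    known: Dict[str, str] = field(default_factory=dict)     # attribute path -> resolved value

# "What is the population of India?"
state = ReasoningState(
    given=[BrainToken("common/entity/country/1")],              # the India node
    ask=[BrainToken("/common/attribute/country/population")],   # the population attribute
)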
KnowledgeQuestion --> Brain Knowledge Repository 2.0 --> KnowledgeAnswer
1. Action: Given a state, what actions can we take?
Generating Path/Action/Query
Since the State.GIVEN token is an EntityInstance and there is only one instance, we can reach doc/node '1' of the country collection in the KG. As we are at this node and there is no other GIVEN token, the only option is to read the attributes of node 1. This new information is passed to Update State.
One possible optimization: any token with the same entityType can be used as a filter. For example, here the ASK.BrainToken is of the same entityType and is therefore applicable as a filter. The per-case actions are enumerated in 1.1-1.3 below; a code sketch follows the list.
Action = Query(EntityID(1), Attribute(/common/attribute/country/population))
1.1 State.GIVEN.BrainToken == EntityInstance
1.1.1 You can read all the attribute types (schema) ex. country as an entity type will have population, gdp, area, etc.
1.1.2 You can read all the attributes values ex. area = BrainQuantity(1234), pop = BrainQuantity(345)
1.1.3 You can read all predicate types ex. country can have [has_president, belongs_to]
1.1.4 You can move along a path using a predicate. ex. country has_president leads to a person node
1.1.5 You can read all qualifiers
1.2 State.GIVEN.BrainToken == EntityType
1.2.1 You can read all the types of attributes (schema) ex. country as entity type will have population, gdp, area etc
1.2.2 You can fetch all subclasses
1.2.3 You can list all instances of the entity type. ex, all country instances
1.2.4 You can read all predicate types ex. country can have [has_president, belongs_to]
1.2.5 You can read all qualifier schema
1.3 State.GIVEN.BrainToken == PredicateType
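A hedged sketch of the action enumeration described above, keyed on the kind of the GIVEN token. The classify helper and the action names are illustrative assumptions, continuing the ReasoningState sketch from earlier:

def classify(token):
    """Crude token-kind inference from the path; a real system would consult the KG schema."""
    if "/attribute/" in token.path:
        return "AttributeType"
    if token.path.rsplit("/", 1)[-1].isdigit():
        return "EntityInstance"
    return "EntityType"

def enumerate_actions(state):
    """Candidate actions for a state, following cases 1.1 and 1.2 above (1.3 is left open)."""
    actions = []
    for token in state.given:
        kind = classify(token)
        if kind == "EntityInstance":          # case 1.1
            actions += ["read_attribute_schema", "read_attribute_values",
                        "read_predicate_types", "follow_predicate", "read_qualifiers"]
        elif kind == "EntityType":            # case 1.2
            actions += ["read_attribute_schema", "read_subclasses", "list_instances",
                        "read_predicate_types", "read_qualifier_schema"]
    return actions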
2. Update State: Generate a reward state
Input:
Action = Query(EntityID(1), Attribute(/common/attribute/country/population))
Query Response:
/common/attribute/country/
/common/attribute/country/1
/common/attribute/country/population
BrainQuantity("1234")
Reward State = {State + Action}
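Continuing the sketch above, a minimal hedged version of the state update that folds the query response back into the state (field and helper names remain assumptions):

def update_state(state, action, response):
    """Reward State = {State + Action}: record the values returned by the executed query."""
    new_known = dict(state.known)
    for attribute_path, value in response:
        new_known[attribute_path] = value            # e.g. population -> BrainQuantity("1234")
    return ReasoningState(given=list(state.given), ask=list(state.ask), known=new_known)

reward_state = update_state(
    state,
    action="Query(EntityID(1), Attribute(/common/attribute/country/population))",
    response=[("/common/attribute/country/population", "1234")],
)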
3. Stop Mechanism: Is the given State a GOAL State?
The state below becomes the Goal State, as there is no action left to take and the ask is fulfilled.
KnowledgeAnswer(state= GoalState)
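A hedged sketch of the stop mechanism under the same assumptions: the state is a goal state once every ASK token has a resolved value and no further action is required:

def is_goal_state(state):
    """Goal state: every ASK token already has a value recorded in the state."""
    return all(token.path in state.known for token in state.ask)

assert is_goal_state(reward_state)   # the population has been read, so the search stops here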
KnowledgeAnswer --> Brain NLP Synthesis Engine (BNSE) --> English Answer
3.1 Template Classification approach
Question : What is the BrainToken(X) of BrainToken(Y)?
Answer Template: BrainToken(X) of BrainToken(Y) is BrainToken(X.Value)
3.2 Template value replacement
BrainToken("/common/attribute/country/population") of BrainToken("common/entity/country/1") is BrainToken(BQ("/common/attribute/country/population/1234"))
3.3 Converting to natural English (DL+BERT)
Population of India is 138 Crores
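A minimal sketch of steps 3.1 and 3.2 under the same assumptions: classify the question into a template and substitute the resolved values. The hard-coded name map is illustrative (in practice names come from the KG), and step 3.3, the DL+BERT rewrite into fluent English, is not sketched:

ANSWER_TEMPLATE = "{X} of {Y} is {value}"   # for questions matching "What is the X of Y?"

NAMES = {
    "/common/attribute/country/population": "population",
    "common/entity/country/1": "India",
}

def synthesize(state, reward_state):
    ask, given = state.ask[0], state.given[0]
    return ANSWER_TEMPLATE.format(
        X=NAMES[ask.path], Y=NAMES[given.path], value=reward_state.known[ask.path])

print(synthesize(state, reward_state))   # -> "population of India is 1234"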
Within the scope of this document, we will focus on factoid questions first.
What is BrainToken(Attribute) of BrainToken(EntityInstance)?
Can we answer the above with State Thinking? Yes.
Step:1
Approach: Search problem that grows the graph through staged state-actions
A query graph should have exactly one lambda variable to denote the answer, at least one grounded entity, and zero or more existential variables and aggregation functions.
Query graph generation is formulated as a search problem with staged states and actions. Each state is a candidate parse in the query graph representation, and each action defines a way to grow the graph.
Who was U.S. president after the Civil War started
For each detected entity s, we treat it as a subject constant vertex. Based on the KB, for each unique KB 'path' from s, where a KB 'path' means a one-hop predicate p0 or two-hop predicates p1-p2, we construct a basic query graph ⟨s, p0, x⟩ or ⟨s, p1-y_cvt-p2, x⟩. y_cvt and x are variable vertices, and x denotes the answer. For example, the basic query graph B in Figure 2 can be represented as ⟨United States, officials-y0-holder, x⟩.
We use similarity scores of different CNN models described in Sec. 3.2.1 to measure the quality of the core inferential chain
These two CNN models are learned using pairs of the question and the inferential chain of the parse in the training data
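To make the staged search concrete, a hedged sketch of enumerating basic query graphs ⟨s, p0, x⟩ and ⟨s, p1-y_cvt-p2, x⟩ from a detected entity; the kg.outgoing_predicates and kg.is_cvt calls are assumed interfaces, not the actual implementation from the paper:

def basic_query_graphs(kg, s):
    """Enumerate one-hop <s, p0, x> and two-hop <s, p1-y_cvt-p2, x> candidate core chains."""
    candidates = []
    for p0, target in kg.outgoing_predicates(s):
        candidates.append((s, [p0]))                  # <s, p0, x>
        if kg.is_cvt(target):                         # mediator (CVT) node: extend one more hop
            for p2, _ in kg.outgoing_predicates(target):
                candidates.append((s, [p0, p2]))      # <s, p1-y_cvt-p2, x>
    return candidates

Each candidate chain would then be scored against the question with the CNN similarity models mentioned above.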
References
Who was U.S. president after the Civil War started?
BrainTokenization
Steps (sketched in code below):
Tokenize
remove stop words
Identify BrainTokens
entityType
predicateType
AttributeType
entityInstance
predicateInstance
AttributeInstance
Temporal Identification
standard lib
Bag = []
What/When/Which/Where/Who & How [W5H1] - film = Y^ = Looking for instance of Y^
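A hedged sketch of the BrainTokenization steps above; the stop-word list, the annotation_service.lookup interface, and the skipped temporal step are placeholders for the real Brain services:

import re

STOP_WORDS = {"is", "the", "of", "a", "was", "in", "by", "and"}       # illustrative only
QUESTION_WORDS = {"what", "when", "which", "where", "who", "how"}     # W5H1

def brain_tokenize(question, annotation_service):
    tokens = re.findall(r"\w+", question.lower())
    bag = [t for t in tokens if t not in STOP_WORDS]
    # map tokens / n-grams to BrainTokens (entity, predicate, attribute types and instances)
    brain_tokens = [bt for bt in (annotation_service.lookup(t) for t in bag) if bt]
    answer_vars = [t for t in bag if t in QUESTION_WORDS]  # W5H1 word marks the asked-for instance
    # temporal identification (standard lib date/NER pass) is omitted in this sketch
    return bag, brain_tokens, answer_vars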
Basic Dependency Graph Generation
/common/actor/1 ==> Amitabh Bachchan
(star_in)
(context: starred by Amitabh Bachchan) - ngram - AnnotationService
[] --> star_in --> (Y1) --> directed_by --> X1
                       --> produced_by --> X2
                       --> genre       --> X3
                   (Y2)
                   (Y3)
[/common/director/1] --> instanceOf (Y1^, X1^, X2^, X3^) --> X1
[/common/director/1] --> valueCheck (X1[1…n]) --> X1[k]
[/common/entity/film] --> typeCheck [Y1^, X1^, …] --> Y1^
Add Constraints
Temporal
1. Implicit
// TODO :
2. Explicit
[https://stanfordnlp.github.io/CoreNLP/ner.html]
after year 2000 [Before/After/Since/From]
film -- year attributeType
AttributeType[/common/attribute/film/year]
BrainFilter
/common/attribute/film/year > 2000
screened after 2000
screened >> film.year_of_release
result : /common/attribute/film/year > 2000
brainNameService : screen
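A minimal sketch of the explicit temporal case, assuming the attribute path has already been resolved (e.g. via brainNameService mapping "screened" to film.year_of_release); the regex and the tuple shape of the BrainFilter are illustrative:

import re

COMPARATORS = {"after": ">", "since": ">=", "from": ">=", "before": "<"}

def explicit_temporal_filter(phrase, attribute_path="/common/attribute/film/year"):
    """'screened after 2000' -> ('/common/attribute/film/year', '>', 2000)."""
    m = re.search(r"\b(after|before|since|from)\b\D*(\d{4})", phrase.lower())
    return (attribute_path, COMPARATORS[m.group(1)], int(m.group(2))) if m else None

print(explicit_temporal_filter("screened after 2000"))
# -> ('/common/attribute/film/year', '>', 2000)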
3. Ordinal
richest, poorest, fastest, maximum, first
maximum, richest - maxAtN
poorest, cheapest - minAtN
max[/common/attribute/film/revenue]
min[/common/attribute/film/revenue]
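A hedged sketch of the ordinal case, mapping superlative cue words to maxAtN / minAtN aggregations over a resolved attribute (the word lists and tuple shape are assumptions):

MAX_WORDS = {"maximum", "richest", "fastest", "largest"}
MIN_WORDS = {"minimum", "poorest", "cheapest", "smallest"}

def ordinal_constraint(word, attribute_path):
    """'richest' over revenue -> ('maxAtN', '/common/attribute/film/revenue')."""
    if word in MAX_WORDS:
        return ("maxAtN", attribute_path)
    if word in MIN_WORDS:
        return ("minAtN", attribute_path)
    return None

print(ordinal_constraint("richest", "/common/attribute/film/revenue"))
# -> ('maxAtN', '/common/attribute/film/revenue')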
[A1] --> star_in [Y1, Y2] --> directed_by [X1]
Answer : Y1, Y2
Dependency Graph >> Query >> ArangoDB
for f in films
    f.outbound actor
        actor.id = actor/1
    && f.outbound director
        director.id = director/1
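For reference, a sketch of what the pseudocode above could look like as an actual AQL query run through the python-arango driver; the edge collection names (star_in, directed_by), database name, and credentials are assumptions:

from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")
db = client.db("brain", username="root", password="")   # placeholder connection details

cursor = db.aql.execute(
    """
    FOR f IN films
      FOR a IN 1..1 OUTBOUND f star_in
        FILTER a._id == "actor/1"
        FOR d IN 1..1 OUTBOUND f directed_by
          FILTER d._id == "director/1"
          RETURN f
    """
)
matching_films = list(cursor)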
BrainExpression where
BrainToken 'op' LITERAL
/common/attribute/film/year > 2000
max[BrainToken]
max[/common/attribute/film/revenue]
DSL
Looking for an instance of the Film entity that is connected to a Director and an Actor via relationships, and specifically to director/1 and actor/1.
message State {
  // field types assumed to be string token paths
  repeated string entity_type = 1;
  repeated string entity = 2;
  repeated string predicate_type = 3;
}
message BaseGraph {
  string subject = 1;                            // actor/1 [Actor], context
  string relation = 2;                           // star_in, context
  repeated string possible_two_hop_options = 3;  // path/query, [director. .....]
  map<string, string> context = 4;               // keyed by context type
}
message Constraint {
  // ex. /common/attribute/film/year > 2000
  repeated BrainExpression expression = 1;
}
message AnswerState {
  // KNOWN vs UNKNOWN
  repeated BrainToken brain_token = 1;  // instance of film, film
  // ??
}
message DependencyGraph {
  AnswerState answer_state = 1;
  BaseGraph base_graph = 2;
  State state = 3;
  Constraint constraint = 4;
}
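As a usage illustration, a hedged Python sketch that assembles the pieces above for the film example and lowers them into the query shape shown next (plain dicts stand in for the protobuf messages; all names are illustrative):

dependency_graph = {
    "answer_state": ["/common/entity/film"],                 # looking for a film instance
    "base_graph": [
        {"subject": "actor/1",    "relation": "star_in"},
        {"subject": "director/1", "relation": "directed_by"},
    ],
    "constraint": [("/common/attribute/film/year", ">", 2000)],
}

def to_query(dg):
    """Lower the dependency graph into the pseudo-query sketched below."""
    lines = ["for f in films"]
    for edge in dg["base_graph"]:
        lines.append(f"    f.outbound {edge['relation']} == {edge['subject']}")
    for path, op, literal in dg["constraint"]:
        lines.append(f"    FILTER f.{path.rsplit('/', 1)[-1]} {op} {literal}")
    return "\n".join(lines)

print(to_query(dependency_graph))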
for f in films
    f.outbound actor
        actor.id = actor/1
    && f.outbound director
        director.id = director/1
    FILTER
        f.year > 2000
for ....
...