# S3 on Outposts issues
This article describes the issues we encountered while integrating Hive Metastore, Trino, and S3 on Outposts.
## CASE 1: S3 endpoint set to `https://s3-outposts.ap-northeast-1.amazonaws.com`
In this case, setting the S3 endpoint to `https://s3-outposts.ap-northeast-1.amazonaws.com` produces the Hive Metastore debug messages recorded in [another note](https://hackmd.io/@BochengYang/HJpMQxR7n).
Judging from the debug and error messages, this looks like an unsupported operation.
## CASE 2: S3 endpoint set to `https://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3.op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com`
When we use the other endpoint form that AWS mentions, the error message suggests we are sending a wrong parameter to the endpoint; the detailed debug and error messages are [in this note](https://hackmd.io/@BochengYang/r1pVmgCm2).
## CASE 3: S3 endpoint set to `https://op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com`
Another attempt was to set the S3 endpoint to `https://op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com`, but it still does not work; the error message is recorded [in this note](https://hackmd.io/@BochengYang/SJ18Csl4h).
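For reference, the three endpoints tried above differ only in how much of the access-point alias and Outpost ID they include. A small sketch (values copied verbatim from this note) of how each hostname is composed; this is illustrative only, not an official AWS naming API:

```python
# Composing the S3 on Outposts endpoints tried in the three cases above.
# General pattern for the bucket-alias form (CASE 2):
#   https://<access-point-alias>.<outpost-id>.s3-outposts.<region>.amazonaws.com
region = "ap-northeast-1"
outpost_id = "op-01427051a3dc18b69"
access_point_alias = "ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3"

# CASE 1: regional endpoint only
case1 = f"https://s3-outposts.{region}.amazonaws.com"
# CASE 2: access-point alias + Outpost ID (the endpoint used in core-site.xml below)
case2 = f"https://{access_point_alias}.{outpost_id}.s3-outposts.{region}.amazonaws.com"
# CASE 3: Outpost ID only
case3 = f"https://{outpost_id}.s3-outposts.{region}.amazonaws.com"

print(case2)
```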
## Additional Information
### core-site.xml
```xml=
<configuration>
  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <property>
    <name>fs.s3a.connection.ssl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>https://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3.op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>____________</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>_______________</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.s3a.signing-algorithm</name>
    <value>S3SignerType</value>
  </property>
  <!-- https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#Storage_Classes -->
  <!-- https://aws.amazon.com/s3/storage-classes/ -->
  <property>
    <name>fs.s3a.create.storage.class</name>
    <value>outposts</value>
  </property>
</configuration>
```
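Hadoop reads this file as plain XML, so a quick self-check with Python's standard library catches structural mistakes (missing tags, unescaped characters) before restarting the metastore. This is a sketch, not the project's tooling; the endpoint value below is a shortened placeholder, while the property names mirror the ones in core-site.xml above:

```python
import xml.etree.ElementTree as ET

# Shortened stand-in for core-site.xml; only the endpoint value is a placeholder.
CORE_SITE = """
<configuration>
  <property><name>fs.s3a.impl</name><value>org.apache.hadoop.fs.s3a.S3AFileSystem</value></property>
  <property><name>fs.s3a.endpoint</name><value>https://example.s3-outposts.ap-northeast-1.amazonaws.com</value></property>
  <property><name>fs.s3a.path.style.access</name><value>true</value></property>
  <property><name>fs.s3a.create.storage.class</name><value>outposts</value></property>
</configuration>
"""

def load_hadoop_conf(xml_text: str) -> dict:
    """Return {property name: value} from Hadoop-style configuration XML."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

conf = load_hadoop_conf(CORE_SITE)
print(conf["fs.s3a.create.storage.class"])
```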
### hive-site.xml
```xml=
<configuration>
  <!-- Postgres metastore connection details (stores info about tables etc.) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://lighthouse-outpost-dev-hivemetastoredb.cm25znb924dp.ap-northeast-1.rds.amazonaws.com:5432/hivemetastoredb?allowPublicKeyRetrieval=true&amp;useSSL=false&amp;serverTimezone=UTC</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>postgres</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>_____________________</value>
  </property>
  <property>
    <name>metastore.thrift.uris</name>
    <value>thrift://hivemetastore-test-s3-outposts.hivemetastore.svc.cluster.local:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <!-- <property>
    <name>metastore.task.threads.always</name>
    <value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.MaterializationsCacheCleanerTask</value>
  </property> -->
  <property>
    <name>metastore.task.threads.always</name>
    <value>org.apache.hadoop.hive.metastore.events.EventCleanerTask</value>
  </property>
  <property>
    <name>metastore.expression.proxy</name>
    <value>org.apache.hadoop.hive.metastore.DefaultPartitionExpressionProxy</value>
  </property>
  <property>
    <name>metastore.warehouse.dir</name>
    <value>s3a://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3</value>
  </property>
  <property>
    <name>metastore.log4j.file</name>
    <value>/opt/hive/conf/hive-log4j2.properties</value>
  </property>
</configuration>
```
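One consistency point that is easy to miss: the bucket name in `metastore.warehouse.dir` is the same string as the access-point alias that forms the first label of the `fs.s3a.endpoint` hostname in core-site.xml. A small sketch with the values copied from the two configs above (the "must match" expectation is our reading of the setup, not a documented rule we verified):

```python
# Values copied from metastore.warehouse.dir (hive-site.xml) and
# fs.s3a.endpoint (core-site.xml) above.
warehouse_dir = "s3a://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3"
endpoint = "https://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3.op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com"

# Bucket = first path segment after the s3a:// scheme.
bucket = warehouse_dir.removeprefix("s3a://").split("/", 1)[0]
# Alias = first DNS label of the endpoint hostname.
alias = endpoint.removeprefix("https://").split(".", 1)[0]

print(bucket == alias)  # → True
```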
### hive catalog (Trino)
```properties=
connector.name=hive
hive.metastore.uri=thrift://hivemetastore-test-s3-outposts.hivemetastore.svc.cluster.local:9083
#hive.max-partitions-per-scan=1000000
hive.s3.endpoint=https://ipass-datala-o01427051a3dc18b69nmaa8betpfvhubhf2siqapn10--op-s3.op-01427051a3dc18b69.s3-outposts.ap-northeast-1.amazonaws.com
hive.s3.path-style-access=true
hive.s3.ssl.enabled=true
hive.s3.max-connections=100
hive.s3.aws-access-key=________________
hive.s3.aws-secret-key=____________________________________
hive.allow-drop-table=true
hive.allow-add-column=true
hive.allow-drop-column=true
hive.allow-rename-table=true
hive.allow-rename-column=true
hive.metastore-timeout=300s
hive.storage-format=PARQUET
```
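Trino catalog files are plain Java-style properties, so typos never fail loudly; a minimal parsing sketch (not Trino's own loader) that can be used to diff these `hive.s3.*` values against the corresponding `fs.s3a.*` values in core-site.xml. The property values below are a shortened stand-in for the catalog above:

```python
def parse_properties(text: str) -> dict:
    """Parse a .properties file: skip blank lines and '#' comments, split on '='."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # commented-out keys (like hive.max-partitions-per-scan) are dropped
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Shortened stand-in for the Trino hive catalog above.
CATALOG = """\
connector.name=hive
#hive.max-partitions-per-scan=1000000
hive.s3.endpoint=https://example.s3-outposts.ap-northeast-1.amazonaws.com
hive.s3.path-style-access=true
"""

props = parse_properties(CATALOG)
print(sorted(props))
```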