OK, we applied the new resource limits, and Kubernetes is no longer evicting the Alphas. Now we are seeing restarts instead: the Alpha pods are being OOMKilled and then crash-looping.
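For reference, the limits now in place on the Alpha containers (visible in the describe output below) are a 7Gi memory limit with 4Gi memory / 2 CPU requests. A quick way to confirm what is actually applied and to watch the restart counts, as a sketch (adjust the namespace and label selector to your release):

kubectl -n dev get statefulset dgraph-dev-dgraph-alpha -o jsonpath='{.spec.template.spec.containers[0].resources}'
kubectl -n dev get pods -l component=alpha,release=dgraph-dev -w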
alpha-1:
kubectl describe pod/dgraph-dev-dgraph-alpha-1
Name: dgraph-dev-dgraph-alpha-1
Namespace: dev
Priority: 0
Node: ip-10-145-60-127.ec2.internal/10.145.60.127
Start Time: Fri, 05 Jun 2020 15:10:25 -0700
Labels: app=dgraph
chart=dgraph-0.0.4
component=alpha
controller-revision-hash=dgraph-dev-dgraph-alpha-999655658
release=dgraph-dev
statefulset.kubernetes.io/pod-name=dgraph-dev-dgraph-alpha-1
Annotations: kubernetes.io/psp: eks.privileged
prometheus.io/path: /debug/prometheus_metrics
prometheus.io/port: 8080
prometheus.io/scrape: true
Status: Running
IP: 10.145.48.133
Controlled By: StatefulSet/dgraph-dev-dgraph-alpha
Containers:
dgraph-dev-dgraph-alpha:
Container ID: docker://e1be335d428c1d8ab23c89e5db598c8066fed449268faf79815199a543223318
Image: docker.io/dgraph/dgraph:v20.03.3
Image ID: docker-pullable://dgraph/dgraph@sha256:1497b8eda8141857906a9b1412615f457e6a92fbf645276a9b5813fbf3342f19
Ports: 7080/TCP, 8080/TCP, 9080/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
bash
-c
set -ex
dgraph alpha --my=$(hostname -f):7080 --lru_mb 2048 --zero dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.${POD_NAMESPACE}.svc.cluster.local:5080
State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 05 Jun 2020 15:29:26 -0700
Finished: Fri, 05 Jun 2020 15:33:12 -0700
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 05 Jun 2020 15:22:54 -0700
Finished: Fri, 05 Jun 2020 15:27:52 -0700
Ready: False
Restart Count: 5
Limits:
memory: 7Gi
Requests:
cpu: 2
memory: 4Gi
Environment:
POD_NAMESPACE: dev (v1:metadata.namespace)
Mounts:
/dgraph from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zkmwp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-dgraph-dev-dgraph-alpha-1
ReadOnly: false
default-token-zkmwp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zkmwp
Optional: false
QoS Class: Burstable
Node-Selectors: role=primary
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 27m (x2 over 27m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had no available volume zone, 1 max limit reached, 2 node(s) had taints that the pod didn't tolerate
Normal NotTriggerScaleUp 25m (x5 over 26m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 max limit reached, 2 node(s) had taints that the pod didn't tolerate, 1 node(s) had no available volume zone
Normal NotTriggerScaleUp 23m (x3 over 27m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had no available volume zone, 2 node(s) had taints that the pod didn't tolerate, 1 max limit reached
Normal NotTriggerScaleUp 23m (x15 over 27m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) had taints that the pod didn't tolerate, 1 node(s) had no available volume zone, 1 max limit reached
Warning FailedScheduling 23m (x5 over 27m) default-scheduler 0/5 nodes are available: 1 node(s) didn't match node selector, 2 Insufficient cpu, 2 node(s) had taints that the pod didn't tolerate.
Normal Scheduled 23m default-scheduler Successfully assigned dev/dgraph-dev-dgraph-alpha-1 to ip-10-145-60-127.ec2.internal
Normal SuccessfulAttachVolume 23m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-3c5dad2c-bc2d-4f64-9c74-9d6310d055d3"
Normal Pulled 10m (x5 over 22m) kubelet, ip-10-145-60-127.ec2.internal Container image "docker.io/dgraph/dgraph:v20.03.3" already present on machine
Normal Created 10m (x5 over 22m) kubelet, ip-10-145-60-127.ec2.internal Created container dgraph-dev-dgraph-alpha
Normal Started 10m (x5 over 22m) kubelet, ip-10-145-60-127.ec2.internal Started container dgraph-dev-dgraph-alpha
Warning BackOff 14s (x14 over 17m) kubelet, ip-10-145-60-127.ec2.internal Back-off restarting failed container
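So the container hits the 7Gi limit (OOMKilled, exit code 137) roughly four to five minutes after each start, even with --lru_mb 2048. To watch how fast memory climbs before the kill, something like the following can help; this is a sketch that assumes metrics-server is installed for kubectl top, and it only works during the window while the container is up between kills:

kubectl -n dev top pod dgraph-dev-dgraph-alpha-1
kubectl -n dev port-forward pod/dgraph-dev-dgraph-alpha-1 8080:8080
curl -s localhost:8080/debug/prometheus_metrics | grep -iE 'memstats|memory'

The /debug/prometheus_metrics path and port 8080 are the ones annotated on the pod above.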
zero-1:
Name: dgraph-dev-dgraph-zero-1
Namespace: dev
Priority: 0
Node: ip-10-145-60-127.ec2.internal/10.145.60.127
Start Time: Fri, 05 Jun 2020 12:22:46 -0700
Labels: app=dgraph
chart=dgraph-0.0.4
component=zero
controller-revision-hash=dgraph-dev-dgraph-zero-58b4c564bc
release=dgraph-dev
statefulset.kubernetes.io/pod-name=dgraph-dev-dgraph-zero-1
Annotations: kubernetes.io/psp: eks.privileged
prometheus.io/path: /debug/prometheus_metrics
prometheus.io/port: 6080
prometheus.io/scrape: true
Status: Running
IP: 10.145.33.6
Controlled By: StatefulSet/dgraph-dev-dgraph-zero
Containers:
dgraph-dev-dgraph-zero:
Container ID: docker://5ba1f21307c1b6c5591583d2cf3e85bdf1788b9f07a77cebc4a6ff83ca661707
Image: docker.io/dgraph/dgraph:v20.03.3
Image ID: docker-pullable://dgraph/dgraph@sha256:1497b8eda8141857906a9b1412615f457e6a92fbf645276a9b5813fbf3342f19
Ports: 5080/TCP, 6080/TCP
Host Ports: 0/TCP, 0/TCP
Command:
bash
-c
set -ex
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
idx=$(($ordinal + 1))
if [[ $ordinal -eq 0 ]]; then
exec dgraph zero --my=$(hostname -f):5080 --idx $idx --replicas 5
else
exec dgraph zero --my=$(hostname -f):5080 --peer dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.${POD_NAMESPACE}.svc.cluster.local:5080 --idx $idx --replicas 5
fi
State: Running
Started: Fri, 05 Jun 2020 12:22:54 -0700
Ready: True
Restart Count: 0
Requests:
memory: 100Mi
Environment:
POD_NAMESPACE: dev (v1:metadata.namespace)
Mounts:
/dgraph from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zkmwp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-dgraph-dev-dgraph-zero-1
ReadOnly: false
default-token-zkmwp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zkmwp
Optional: false
QoS Class: Burstable
Node-Selectors: role=primary
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
alpha StatefulSet:
Name: dgraph-dev-dgraph-alpha
Namespace: dev
CreationTimestamp: Thu, 04 Jun 2020 13:48:30 -0700
Selector: app=dgraph,chart=dgraph-0.0.4,component=alpha,release=dgraph-dev
Labels: app=dgraph
chart=dgraph-0.0.4
component=alpha
heritage=Tiller
release=dgraph-dev
Annotations: <none>
Replicas: 3 desired | 3 total
Update Strategy: RollingUpdate
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=dgraph
chart=dgraph-0.0.4
component=alpha
release=dgraph-dev
Annotations: prometheus.io/path: /debug/prometheus_metrics
prometheus.io/port: 8080
prometheus.io/scrape: true
Containers:
dgraph-dev-dgraph-alpha:
Image: docker.io/dgraph/dgraph:v20.03.3
Ports: 7080/TCP, 8080/TCP, 9080/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
bash
-c
set -ex
dgraph alpha --my=$(hostname -f):7080 --lru_mb 2048 --zero dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.${POD_NAMESPACE}.svc.cluster.local:5080
Limits:
memory: 7Gi
Requests:
cpu: 2
memory: 4Gi
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
Mounts:
/dgraph from datadir (rw)
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir
ReadOnly: false
Volume Claims:
Name: datadir
StorageClass: io1-fast-retain
Labels: <none>
Annotations: volume.alpha.kubernetes.io/storage-class=anything
Capacity: 40Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning RecreatingFailedPod 56m (x145 over 22h) statefulset-controller StatefulSet dev/dgraph-dev-dgraph-alpha is recreating failed Pod dgraph-dev-dgraph-alpha-0
Normal SuccessfulDelete 45m (x194 over 23h) statefulset-controller delete Pod dgraph-dev-dgraph-alpha-1 in StatefulSet dgraph-dev-dgraph-alpha successful
Normal SuccessfulDelete 40m (x8 over 23h) statefulset-controller delete Pod dgraph-dev-dgraph-alpha-2 in StatefulSet dgraph-dev-dgraph-alpha successful
Warning RecreatingFailedPod 34m (x194 over 23h) statefulset-controller StatefulSet dev/dgraph-dev-dgraph-alpha is recreating failed Pod dgraph-dev-dgraph-alpha-1
Normal SuccessfulDelete 28m (x149 over 22h) statefulset-controller delete Pod dgraph-dev-dgraph-alpha-0 in StatefulSet dgraph-dev-dgraph-alpha successful
Normal SuccessfulCreate 23m (x142 over 25h) statefulset-controller create Pod dgraph-dev-dgraph-alpha-0 in StatefulSet dgraph-dev-dgraph-alpha successful
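The controller has been deleting and recreating the failed Alpha pods for roughly the last 23 hours. To correlate the OOM kills with the recreations over time, a namespace-wide event listing sorted by time is handy (sketch; output columns vary a bit across kubectl versions):

kubectl -n dev get events --sort-by=.lastTimestamp | grep -iE 'alpha|oom|backoff'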
crash-looping alpha-1 logs:
++ hostname -f
+ dgraph alpha --my=dgraph-dev-dgraph-alpha-1.dgraph-dev-dgraph-alpha-headless.dev.svc.cluster.local:7080 --lru_mb 2048 --zero dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.dev.svc.cluster.local:5080
[Decoder]: Using assembly version of decoder
[Sentry] 2020/06/05 22:35:58 Integration installed: ContextifyFrames
[Sentry] 2020/06/05 22:35:58 Integration installed: Environment
[Sentry] 2020/06/05 22:35:58 Integration installed: Modules
[Sentry] 2020/06/05 22:35:58 Integration installed: IgnoreErrors
[Decoder]: Using assembly version of decoder
[Sentry] 2020/06/05 22:35:58 Integration installed: ContextifyFrames
[Sentry] 2020/06/05 22:35:58 Integration installed: Environment
[Sentry] 2020/06/05 22:35:58 Integration installed: Modules
[Sentry] 2020/06/05 22:35:58 Integration installed: IgnoreErrors
I0605 22:35:58.417064 17 init.go:99]
Dgraph version : v20.03.3
Dgraph SHA-256 : 08424035910be6b6720570427948bab8352a0b5a6d59a0d20c3ec5ed29533121
Commit SHA-1 : fa3c19120
Commit timestamp : 2020-06-02 16:47:25 -0700
Branch : HEAD
Go version : go1.14.1
For Dgraph official documentation, visit https://docs.dgraph.io.
For discussions about Dgraph , visit http://discuss.hypermode.com.
To say hi to the community , visit https://dgraph.slack.com.
Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.
I0605 22:35:58.417546 17 run.go:608] x.Config: {PortOffset:0 QueryEdgeLimit:1000000 NormalizeNodeLimit:10000}
I0605 22:35:58.417582 17 run.go:609] x.WorkerConfig: {ExportPath:export NumPendingProposals:256 Tracing:0.01 MyAddr:dgraph-dev-dgraph-alpha-1.dgraph-dev-dgraph-alpha-headless.dev.svc.cluster.local:7080 ZeroAddr:[dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.dev.svc.cluster.local:5080] RaftId:0 WhiteListedIPRanges:[] MaxRetries:-1 StrictMutations:false AclEnabled:false AbortOlderThan:5m0s SnapshotAfter:10000 ProposedGroupId:0 StartTime:2020-06-05 22:35:58.133536044 +0000 UTC m=+0.013440182 LudicrousMode:false BadgerKeyFile:}
I0605 22:35:58.417623 17 run.go:610] worker.Config: {PostingDir:p BadgerTables:mmap BadgerVlog:mmap BadgerKeyFile: BadgerCompressionLevel:3 WALDir:w MutationsMode:0 AuthToken: AllottedMemory:2048 HmacSecret:**** AccessJwtTtl:0s RefreshJwtTtl:0s AclRefreshInterval:0s}
I0605 22:35:58.417693 17 server_state.go:75] Setting Badger Compression Level: 3
I0605 22:35:58.417712 17 server_state.go:84] Setting Badger table load option: mmap
I0605 22:35:58.417717 17 server_state.go:96] Setting Badger value log load option: mmap
I0605 22:35:58.417723 17 server_state.go:141] Opening write-ahead log BadgerDB with options: {Dir:w ValueDir:w SyncWrites:false TableLoadingMode:1 ValueLogLoadingMode:2 NumVersionsToKeep:1 ReadOnly:false Truncate:true Logger:0x28325d0 Compression:2 InMemory:false MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1048576 NumMemtables:5 BlockSize:4096 BloomFalsePositive:0.01 KeepL0InMemory:true MaxCacheSize:10485760 MaxBfCacheSize:0 LoadBloomsOnOpen:false NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:10000 NumCompactors:2 CompactL0OnClose:true LogRotatesToFlush:2 ZSTDCompressionLevel:3 VerifyValueChecksum:false EncryptionKey:[] EncryptionKeyRotationDuration:240h0m0s BypassLockGuard:false ChecksumVerificationMode:0 KeepBlockIndicesInCache:false KeepBlocksInCache:false managedTxns:false maxBatchCount:0 maxBatchSize:0}
I0605 22:35:58.512423 17 log.go:34] All 3 tables opened in 87ms
I0605 22:35:58.675073 17 log.go:34] Replaying file id: 1320 at offset: 8242372
I0605 22:35:58.675631 17 log.go:34] Replay took: 522.537µs
I0605 22:35:58.676094 17 log.go:34] Replaying file id: 1321 at offset: 0
I0605 22:35:58.737192 17 log.go:34] Replay took: 61.072561ms
I0605 22:35:58.737710 17 log.go:34] Replaying file id: 1322 at offset: 0
I0605 22:35:58.807857 17 log.go:34] Replay took: 70.119278ms
I0605 22:35:58.808547 17 log.go:34] Replaying file id: 1323 at offset: 0
I0605 22:35:58.868399 17 log.go:34] Replay took: 59.813756ms
I0605 22:35:58.868984 17 log.go:34] Replaying file id: 1324 at offset: 0
I0605 22:35:58.919768 17 log.go:34] Replay took: 50.757499ms
I0605 22:35:58.920311 17 log.go:34] Replaying file id: 1325 at offset: 0
I0605 22:35:58.973864 17 log.go:34] Replay took: 53.529235ms
I0605 22:35:58.974393 17 log.go:34] Replaying file id: 1326 at offset: 0
I0605 22:35:59.112355 17 log.go:34] Replay took: 137.920994ms
I0605 22:35:59.112676 17 server_state.go:75] Setting Badger Compression Level: 3
I0605 22:35:59.112691 17 server_state.go:84] Setting Badger table load option: mmap
I0605 22:35:59.112697 17 server_state.go:96] Setting Badger value log load option: mmap
I0605 22:35:59.112707 17 server_state.go:164] Opening postings BadgerDB with options: {Dir:p ValueDir:p SyncWrites:false TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:2147483647 ReadOnly:false Truncate:true Logger:0x28325d0 Compression:2 InMemory:false MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1024 NumMemtables:5 BlockSize:4096 BloomFalsePositive:0.01 KeepL0InMemory:true MaxCacheSize:1073741824 MaxBfCacheSize:0 LoadBloomsOnOpen:false NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:1000000 NumCompactors:2 CompactL0OnClose:true LogRotatesToFlush:2 ZSTDCompressionLevel:3 VerifyValueChecksum:false EncryptionKey:[] EncryptionKeyRotationDuration:240h0m0s BypassLockGuard:false ChecksumVerificationMode:0 KeepBlockIndicesInCache:true KeepBlocksInCache:true managedTxns:false maxBatchCount:0 maxBatchSize:0}
I0605 22:36:00.048454 17 log.go:34] All 91 tables opened in 909ms
I0605 22:36:00.063875 17 log.go:34] Replaying file id: 23 at offset: 0
I0605 22:36:01.315087 17 log.go:34] Got compaction priority: {level:1 score:1.2208955474197865 dropPrefix:[]}
I0605 22:36:01.315258 17 log.go:34] Running for level: 1
I0605 22:36:01.595335 17 log.go:34] Got compaction priority: {level:1 score:1.1243503205478191 dropPrefix:[]}
I0605 22:36:01.595616 17 log.go:34] Running for level: 1
I0605 22:36:08.627094 17 log.go:34] LOG Compact 1->2, del 5 tables, add 4 tables, took 7.311798658s
I0605 22:36:08.627315 17 log.go:34] Compaction for level: 1 DONE
I0605 22:36:08.627400 17 log.go:34] Got compaction priority: {level:1 score:1.0796349346637726 dropPrefix:[]}
I0605 22:36:08.627552 17 log.go:34] Running for level: 1
I0605 22:36:11.573709 17 log.go:34] Replay took: 11.509798048s
I0605 22:36:11.575815 17 log.go:34] Replaying file id: 25 at offset: 0
I0605 22:36:13.977067 17 log.go:34] LOG Compact 1->2, del 7 tables, add 7 tables, took 12.381274342s
I0605 22:36:13.977216 17 log.go:34] Compaction for level: 1 DONE
I0605 22:36:16.903854 17 log.go:34] Replay took: 5.327982588s
I0605 22:36:16.904719 17 log.go:34] Replaying file id: 26 at offset: 0
I0605 22:36:17.066554 17 log.go:34] LOG Compact 1->2, del 6 tables, add 5 tables, took 8.43892995s
I0605 22:36:17.066619 17 log.go:34] Compaction for level: 1 DONE
I0605 22:36:17.066825 17 log.go:34] Got compaction priority: {level:0 score:1 dropPrefix:[]}
I0605 22:36:17.066857 17 log.go:34] Running for level: 0
I0605 22:36:24.540751 17 log.go:34] Replay took: 7.636001835s
I0605 22:36:24.546334 17 log.go:34] Replaying file id: 28 at offset: 0
I0605 22:36:29.129834 17 log.go:34] Replay took: 4.583456501s
I0605 22:36:29.130446 17 log.go:34] Replaying file id: 29 at offset: 0
I0605 22:36:33.364181 17 log.go:34] LOG Compact 0->1, del 15 tables, add 11 tables, took 16.297298796s
I0605 22:36:33.364247 17 log.go:34] Compaction for level: 0 DONE
I0605 22:36:33.364282 17 log.go:34] Got compaction priority: {level:0 score:1 dropPrefix:[]}
I0605 22:36:33.364310 17 log.go:34] Running for level: 0
I0605 22:36:33.594709 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:33.769234 17 log.go:34] Replay took: 4.638753754s
I0605 22:36:33.769948 17 log.go:34] Replaying file id: 30 at offset: 0
I0605 22:36:34.602232 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:35.589766 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:36.591415 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:37.282908 17 log.go:34] Replay took: 3.51294179s
I0605 22:36:37.283441 17 log.go:34] Replaying file id: 32 at offset: 0
I0605 22:36:37.588969 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:38.589098 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:39.588473 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:40.588861 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:41.588809 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:42.590654 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:43.588529 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:44.588765 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:45.588622 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:46.589379 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:47.588576 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:48.588775 17 log.go:34] Got compaction priority: {level:1 score:1.0463079996407032 dropPrefix:[]}
I0605 22:36:48.969306 17 log.go:34] LOG Compact 0->1, del 16 tables, add 12 tables, took 15.604972375s
I0605 22:36:48.969375 17 log.go:34] Compaction for level: 0 DONE
I0605 22:36:48.969405 17 log.go:34] Got compaction priority: {level:1 score:1.1713070794939995 dropPrefix:[]}
I0605 22:36:48.969469 17 log.go:34] Running for level: 1
I0605 22:36:49.588466 17 log.go:34] Got compaction priority: {level:1 score:1.0756612867116928 dropPrefix:[]}
I0605 22:36:49.588575 17 log.go:34] Running for level: 1
I0605 22:36:53.624815 17 log.go:34] Replay took: 16.341356021s
I0605 22:36:53.626226 17 log.go:34] Replaying file id: 33 at offset: 0
I0605 22:36:59.661144 17 log.go:34] LOG Compact 1->2, del 6 tables, add 5 tables, took 10.072529789s
I0605 22:36:59.661242 17 log.go:34] Compaction for level: 1 DONE
I0605 22:36:59.661270 17 log.go:34] Got compaction priority: {level:0 score:1.8 dropPrefix:[]}
I0605 22:36:59.661307 17 log.go:34] Running for level: 0
I0605 22:37:00.060919 17 log.go:34] Replay took: 6.434672385s
I0605 22:37:00.065517 17 log.go:34] Replaying file id: 34 at offset: 0
I0605 22:37:02.622775 17 log.go:34] LOG Compact 1->2, del 7 tables, add 6 tables, took 13.653276632s
I0605 22:37:02.623321 17 log.go:34] Compaction for level: 1 DONE
I0605 22:37:02.623619 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:03.314549 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:04.314390 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:05.313986 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:05.847783 17 log.go:34] Replay took: 5.78197229s
I0605 22:37:05.849232 17 log.go:34] Replaying file id: 35 at offset: 0
I0605 22:37:06.315342 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:07.314154 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:08.314644 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:09.316532 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:10.313992 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:11.146590 17 log.go:34] Replay took: 5.297328412s
I0605 22:37:11.147670 17 log.go:34] Replaying file id: 36 at offset: 0
I0605 22:37:11.313941 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:12.314003 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:13.313949 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:14.313937 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:15.313961 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:16.313984 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:17.047058 17 log.go:34] Replay took: 5.899285557s
I0605 22:37:17.048969 17 log.go:34] Replaying file id: 37 at offset: 0
I0605 22:37:17.315118 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:18.313960 17 log.go:34] Got compaction priority: {level:1 score:1.0207197666168213 dropPrefix:[]}
I0605 22:37:18.875483 17 log.go:34] LOG Compact 0->1, del 19 tables, add 12 tables, took 19.214139767s
I0605 22:37:18.875558 17 log.go:34] Compaction for level: 0 DONE
I0605 22:37:18.875589 17 log.go:34] Got compaction priority: {level:1 score:1.1544597744941711 dropPrefix:[]}
I0605 22:37:18.878380 17 log.go:34] Running for level: 1
I0605 22:37:19.314242 17 log.go:34] Got compaction priority: {level:1 score:1.0579145476222038 dropPrefix:[]}
I0605 22:37:19.314427 17 log.go:34] Running for level: 1
[Sentry] 2020/06/05 22:37:27 ModuleIntegration wasn't able to extract modules: module integration failed
[Sentry] 2020/06/05 22:37:27 Sending fatal event [1841064565464d7bae2df7acfd25dd70] to o318308.ingest.sentry.io project: 1805390
2020/06/05 22:37:27 Unable to replay logfile. Path=p/000037.vlog. Error=read p/000037.vlog: cannot allocate memory
During db.vlog.open
github.com/dgraph-io/badger/v2/y.Wrapf
/go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20200528205344-e7b6e76f96e8/y/error.go:82
github.com/dgraph-io/badger/v2.Open
/go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20200528205344-e7b6e76f96e8/db.go:381
github.com/dgraph-io/badger/v2.OpenManaged
/go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20200528205344-e7b6e76f96e8/managed_db.go:26
github.com/dgraph-io/dgraph/worker.(*ServerState).initStorage
/ext-go/1/src/github.com/dgraph-io/dgraph/worker/server_state.go:167
github.com/dgraph-io/dgraph/worker.InitServerState
/ext-go/1/src/github.com/dgraph-io/dgraph/worker/server_state.go:57
github.com/dgraph-io/dgraph/dgraph/cmd/alpha.run
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:612
github.com/dgraph-io/dgraph/dgraph/cmd/alpha.init.2.func1
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:90
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/dgraph-io/dgraph/dgraph/cmd.Execute
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:70
main.main
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/main.go:78
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
Error while creating badger KV posting store
github.com/dgraph-io/dgraph/x.Checkf
/ext-go/1/src/github.com/dgraph-io/dgraph/x/error.go:51
github.com/dgraph-io/dgraph/worker.(*ServerState).initStorage
/ext-go/1/src/github.com/dgraph-io/dgraph/worker/server_state.go:168
github.com/dgraph-io/dgraph/worker.InitServerState
/ext-go/1/src/github.com/dgraph-io/dgraph/worker/server_state.go:57
github.com/dgraph-io/dgraph/dgraph/cmd/alpha.run
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:612
github.com/dgraph-io/dgraph/dgraph/cmd/alpha.init.2.func1
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:90
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/dgraph-io/dgraph/dgraph/cmd.Execute
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:70
main.main
/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/main.go:78
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
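This particular exit is not the cgroup OOM killer but an ENOMEM returned while replaying p/000037.vlog during db.vlog.open; the value log is opened with ValueLogLoadingMode mmap per the options logged above (earlier restarts in the describe output were killed by the OOM killer instead). Two things we may try next, as a sketch rather than a confirmed fix: switch Badger's value-log loading mode from mmap to disk on the Alphas (assuming I have the v20.03 flag name right, it is --badger.vlog), and check vm.max_map_count on the node, since mmap can fail with "cannot allocate memory" when the per-process map limit is reached:

# candidate Alpha command with the value log loaded from disk instead of mmap
dgraph alpha --my=$(hostname -f):7080 --lru_mb 2048 --badger.vlog=disk --zero dgraph-dev-dgraph-zero-0.dgraph-dev-dgraph-zero-headless.${POD_NAMESPACE}.svc.cluster.local:5080

# on the node, the current per-process mmap region limit
sysctl vm.max_map_count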