Monday, October 17, 2022

Installing YETI with Portainer on Ubuntu 22.04

 Another day, another project.  I have been playing with some open-source threat intelligence platforms (I installed OpenCTI and MISP recently with Portainer), and I just found another project called YETI, 'Your Everyday Threat Intelligence'.

 For background, I already have a VM set up running Portainer (it is probably overworked, but it's only for testing, so I'm not too concerned about overloading it).  One change on the VM was to create a folder: 

/tmp/docker-yeti-exports 

In hindsight I would have changed this location in the docker-compose, but I missed it (anything under /tmp is cleared on reboot, so the folder has to be recreated).
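Creating the folder is a one-liner (shown without sudo here; on the VM it may need sudo depending on your user):

```shell
# Create the host-side folder that the compose file bind-mounts into
# /opt/yeti/exports. Note that /tmp is cleared on reboot, so this has to
# be re-run (or the path moved somewhere persistent).
mkdir -p /tmp/docker-yeti-exports
```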

The setup of Yeti inside Portainer took a little more work than the previous builds, as Portainer could not build the image from the docker-compose alone.  I am new to this, so I'm not entirely sure, but I think the project doesn't host a prebuilt image on GitHub or Docker Hub for it to pull from.  

I had to download a few files from the Yeti GitHub repository:

  •     requirements.txt
  •     Dockerfile
  •     docker-entrypoint.sh

Because I would be updating requirements.txt, and I had issues with the original docker-entrypoint.sh, I updated the Dockerfile with the following: 

RUN git clone https://github.com/yeti-platform/yeti.git /opt/yeti;
COPY requirements.txt /opt/yeti
COPY docker-entrypoint.sh /docker-entrypoint.sh

In requirements.txt, I added a new package and pinned versions.  This is due to a compatibility issue between flask and werkzeug:

flask==2.1.2
werkzeug==2.1.2

Next, I created a .tar file containing those three files.  They were bundled in the tar because Portainer treats the tarball as the build context, so the Dockerfile's COPY lines don't need local paths.  I found that tip somewhere else (I think it was on Reddit).  The tar file was used to create an image called yeti:latest as shown below: 


The image took a few minutes to create; after that, it was time to add a new stack to Portainer.
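For reference, the tarball step can be sketched from the command line.  The three filenames are the ones downloaded above; the touch lines just create empty placeholders so the sketch is self-contained:

```shell
# Sketch: bundle the build files into a tarball that Portainer can use as
# the build context when building the yeti:latest image.
mkdir -p /tmp/yeti-build && cd /tmp/yeti-build
touch Dockerfile requirements.txt docker-entrypoint.sh  # placeholders for the real files
tar -cf yeti-build.tar Dockerfile requirements.txt docker-entrypoint.sh
tar -tf yeti-build.tar  # list the contents to verify
```

In Portainer, this tarball is uploaded under Images > Build a new image > Upload.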

First things first, I had to update the docker-compose file for the new image: I changed yeti1:master to yeti:latest, as below.

version: '3.3'
services:

  yeti:
    image: yeti:latest
    ports:
      - "5000:5000"
    command: ['webserver']
    depends_on:
      - redis
      - mongodb
    volumes:
      - /tmp/docker-yeti-exports:/opt/yeti/exports

  feeds:
    image: yeti:latest
    command: ['feeds']
    depends_on:
      - redis
      - mongodb
      - yeti
    environment:
      - TLDEXTRACT_CACHE=/tmp/tldextract.cache

  analytics:
    image: yeti:latest
    command: ['analytics']
    depends_on:
      - redis
      - mongodb
      - yeti
    environment:
      - TLDEXTRACT_CACHE=/tmp/tldextract.cache

  beat:
    image: yeti:latest
    command: ['beat']
    depends_on:
      - redis
      - mongodb
      - yeti
      - feeds
      - analytics
      - exports

  exports:
    image: yeti:latest
    command: ['exports']
    depends_on:
      - redis
      - mongodb
      - yeti
    volumes:
      - /tmp/docker-yeti-exports:/opt/yeti/exports

  oneshot:
    image: yeti:latest
    command: ['oneshot']
    depends_on:
      - redis
      - mongodb
      - yeti

  redis:
    image: redis:latest

  mongodb:
    image: mongo:4.0.12
    environment:
      - MONGO_LOG_DIR=/dev/null
    command: mongod

I created a new stack (called yeti) and deployed it. 


And here is the screen for YETI (which I noticed did not have a login screen).  Not sure if that is normal, by the way, as it's my first time using it.

Yeti Screen


Sunday, October 16, 2022

Building Vulnerability Scanners with Portainer

  Currently, I am in school for my Master's, and we had an assignment to conduct vulnerability scanning on our home networks.  It has been a while since I installed Nessus or OpenVAS, and the technology has certainly changed. 

I have been using Portainer recently for most of my Docker containers and wanted to see if it was just as easy for Nessus and OpenVAS.   

For Nessus, I did a search for 'Nessus docker-compose' and found the following: 

version: '3.1'

services:

  nessus:
    image: tenableofficial/nessus
    restart: always
    container_name: nessus
    environment:
      USERNAME: <user>
      PASSWORD: <password>
      ACTIVATION_CODE: <code>
    ports:
      - 8834:8834

I changed the username/password and activation code.  Then I went into Portainer, created a new stack, and placed the above in the web editor.  

From there, I clicked deploy stack.  About 20 minutes later (plugin updates on Nessus), Nessus Essentials was up and operational.  One side note: Essentials will only scan 16 IPs, but it's free.

For OpenVAS I searched on Google for 'OpenVAS docker-compose' and found https://github.com/immauss/openvas.  From there, I used the below:

version: "3"
services:
  openvas:
    ports:
      - "8080:9392"
    environment:
      - "PASSWORD=admin"
      - "USERNAME=admin"
      - "RELAYHOST=172.17.0.1"
      - "SMTPPORT=25"
      - "REDISDBS=512" # number of Redis DBs to use
      - "QUIET=false" # dump feed sync noise to /dev/null
      - "NEWDB=false" # only use this for creating a blank DB
      - "SKIPSYNC=true" # skips the feed sync on startup
      - "RESTORE=false" # this should probably not be used from compose... see docs
      - "DEBUG=false" # this will cause the container to stop and not actually start gvmd
      - "HTTPS=false" # whether to use HTTPS or not
    volumes:
      - "openvas:/data"
    container_name: openvas
    image: immauss/openvas:$TAG
volumes:
  openvas:

Same procedure as for Nessus: I opened Portainer, added a new stack, copied the above into the web editor, and deployed the stack.  Note that the image line references $TAG, which needs to be set as an environment variable on the stack (or replaced with a fixed tag such as latest).  On this one, I forgot to update the username/password for my instance, so that shows up as a vulnerability when you run a scan. 

Overall, both of these installs were very easy; within about 30 minutes I was up and running scans against my home network. 


Saturday, October 8, 2022

Installing MISP with Portainer on Ubuntu 22.04 VM

 I am installing MISP on the same VM where I have OpenCTI running, as Portainer is already installed there.   

I chose Coolacid's docker buildout

First things first, you have to build out a directory structure on the host VM:  

sudo mkdir /data/compose/#/   

The additional folders under that number (mine was 2) are:

  • files
  • ssl
  • server-configs
  • logs
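The layout can be created in one go.  This is a sketch using a relative base path; on the VM the real base was /data/compose/2, created with sudo:

```shell
# Sketch: create the folder structure Coolacid's MISP build expects.
# BASE stands in for /data/compose/<stack number> (mine was 2); the
# relative default here is just so the sketch runs without root.
BASE="./data/compose/2"
mkdir -p "$BASE"/files "$BASE"/ssl "$BASE"/server-configs "$BASE"/logs
ls "$BASE"
```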
Back at Portainer's web UI, select Stacks from the left menu and click +Add stack.

Next, name the stack (lower case) and use the web editor to upload the docker-compose file.
 

Click Deploy Stack at the bottom of the page and you are ready to access the MISP login screen. 


  • Default email: admin@admin.test
  • password: admin 

The password will be required to be changed on first login.



Sunday, October 2, 2022

Installing OpenCTI with Portainer on Ubuntu 22.04

 Having played around with SecurityOnion, I started looking into threat/intel feeds, which led me to a few applications, OpenCTI and MISP to name two.  Today I am going to look at setting up a Docker instance of OpenCTI on an Ubuntu 22.04 VM. 

While researching OpenCTI, I found documentation on setting up OpenCTI with Portainer.  Having never heard of Portainer, I first wanted to see what it was all about.

From the Portainer website: Container management made easy.  Sold!  I have used Docker a few times, but mostly basic stuff like setting up a container, inspecting the container, etc.  So I don't really have much experience, but from the looks of it, Portainer has a GUI front end and works with Docker and Kubernetes.  I figured I could use it, as I was going to use this system later to install a Docker instance of MISP on the same machine.  

The basis of the install procedures came from here.  

I had selected the "Docker" option while installing Ubuntu 22.04 Server, so I skipped the first part and started with creating a swarm (on a single node, mind you):

docker swarm init --advertise-addr 192.168.1.100

This sets up a Docker swarm with my machine as the manager node.  

Installing Portainer

Below are the commands I ran on my Ubuntu VM for initial setup of Portainer.

mkdir -p /opt/portainer
cd /opt/portainer
curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml

I updated the ports in portainer-agent-stack.yml (due to a conflict with OpenCTI):
       
         ports:
            - "19000:9000"
            - "18000:8000"

The last step is to deploy the Docker stack:

docker stack deploy --compose-file=portainer-agent-stack.yml portainer

Access Portainer at <UbuntuVM_IP>:19000 




Installing OpenCTI

OpenCTI will be installed from within Portainer.  A docker-compose file is required for the installation.


This version had connectors set up for OTX, GreyNoise, AbuseIPDB, Shodan, Intezer, and a few others.  A few configurations are required in the above file; for instance, you will need to update all the UUIDs and add your API keys from the above sites.  Lastly, make sure you add your email address/password to the file in the section below:

    - APP__ADMIN__EMAIL=
    - APP__ADMIN__PASSWORD=
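Since every connector entry needs its own unique ID, here is a quick way to generate fresh UUIDs for the compose file (assuming Python 3 is on the VM; uuidgen works too if installed):

```shell
# Print one random UUID; run once per connector ID in the compose file
python3 -c 'import uuid; print(uuid.uuid4())'
```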


When logged into Portainer you create a new stack as shown below:


Next, you provide a name and copy the docker-compose file into the web editor, as shown below: 


Lastly, deploy the stack and wait about 30 minutes for it to fully build.  Once complete, you will be able to access the site at https://<ip>:8080.

Friday, July 1, 2022

Critical Path Security: SecurityOnion@Home

Recently I have been trying to learn more about the other types of information you can get from Zeek/Suricata (IP reputation/DNS reputation), which previously led me to add IOCs to Suricata with Datasets.  

Today I am adding the CriticalPathSecurity threat intel feeds to Zeek on Security Onion 2.3.130.  Overall it was a pretty simple install, and it only really required one file edit (a Salt file).  

Following these steps:

  • Clone the Critical Path Security Intelligence Feeds:
    • git clone https://github.com/CriticalPathSecurity/Zeek-Intelligence-Feeds.git /opt/so/saltstack/local/zeek/policy/intel/Zeek-Intelligence-Feeds
  • Copy __load__.zeek from default to local:
    • cp /opt/so/saltstack/default/zeek/policy/intel/__load__.zeek /opt/so/saltstack/local/zeek/policy/intel/
  • Edit __load__.zeek:
    • (I added @load integration/collective-intel, and instead of using one intel.dat file, I added each feed file separately under the folder that Salt/Docker maps onto the host machine.)
  • Update Salt:
    • salt systemname_standalone state.highstate 
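As a rough sketch, the __load__.zeek edit looked along these lines.  The collective-intel load is from the step above; the Intel::read_files path and example-feed.intel filename are illustrative placeholders, not my actual entries (this sketch writes to a local copy rather than the real Security Onion path):

```shell
# Hypothetical sketch of the __load__.zeek additions, written to a local
# copy here; on Security Onion the real file lives under
# /opt/so/saltstack/local/zeek/policy/intel/
cat > ./__load__.zeek <<'EOF'
@load integration/collective-intel

# One entry per feed file; example-feed.intel is a placeholder. I added
# each .intel file from the cloned Zeek-Intelligence-Feeds folder like this.
redef Intel::read_files += {
    "/opt/so/saltstack/local/zeek/policy/intel/Zeek-Intelligence-Feeds/example-feed.intel",
};
EOF
cat ./__load__.zeek
```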

__load__.zeek



Let's check the Intel dashboards in Security Onion 2.3.130.

Intel Dashboard Security Onion 2.3.130

The first IP address listed here was from abuse.ch; I did an nslookup on it to make it appear in the list.   

 


Tuesday, June 28, 2022

Suricata: DataSets for IOCs

 After reviewing MalTrail, I wanted to see if there were other ways to provide the same (or close to the same) type of information using software I was already running.  This led me to Suricata's IPREP and DATASETS features.  

While reading up, I found an article over at IDSTower about Datasets and figured this would be a good starting point for comparing the two applications.   

The article set up Datasets for bad domains, and based on the instructions it would be an easy addition to Security Onion without really having to mess with any of the Salt files.  (I believe for IPREP I will have to make some changes to Salt files, but that will be another article.)

Overall the process was pretty simple.  I did change one thing: instead of adding iocs.rules, I added my rule to local.rules and ran so-rule-update.  I believe that to add iocs.rules as a separate source, I would have to edit the Salt file for the IDSTools Docker container. 

After I had set up the new rule, I did an nslookup to a bad site (from the Alerts pane):  


I think I will have to make a custom alert to see more information on these, or maybe a dashboard might work.  

I would like to see the name of the bad IP/DNS entry, and possibly the country/region for the IP/DNS for a quick alert view pane.  

Now MalTrail looks like:  (screen capture taken from their Demo)


I would think that to get the "info" section, I would need to break out the Datasets per type of IOC as the alert description; the other fields are pretty standard and I could pull them.

Next, I think I will try to pull the MalTrail data into Logstash in Security Onion.  I think I read somewhere that there is a Logstash setting in MalTrail, or it should be pretty easy to ingest the data file created by MalTrail into Logstash.  



Sunday, June 26, 2022

Malcom NSM - Installation

 Using the directions from their GitHub, I set out tonight to set up Malcolm NSM.  As I was running out of space on my machine (and, truthfully, was too lazy to open it up and add more drives), I purchased an NVMe M.2 external USB 3.1 enclosure.   

I am using a SAMSUNG 780 500GB for the drive, and that is where I am going to store my Malcolm NSM VM.  

There are a few ways to install Malcolm NSM:

  1. ISO install (which I tried but failed; I don't think I made the disk big enough)
  2. Ubuntu starting image
  3. From source

I picked #2.  

I set up my VM with two NICs, 10GB of memory, 4 cores, and 100GB of disk space.  (The first few times I tried this, I ran out of space at 25GB, hence the above comment about expanding the HD space on the machine.)

With Ubuntu up and operational, it was time to start the installation.  

sudo ./scripts/install.py

And I got my first error (well, it never really shows an error, but after digging, it looks like Docker is not installed by the script).  The error was about trying to add a user to the docker group, which did not exist. 

I tried to review install.py, but it is a pretty big file to review, so I decided to follow other guides for getting Docker installed on Ubuntu 22.04 (Jammy). 

sudo apt install apt-transport-https ca-certificates curl software-properties-common
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install docker-ce
Now I was able to run install.py without any errors (a lot of responses were required, though).
From there, you have to reboot and re-run install.py.  

After that, you run ./auth_setup.
Then you run docker-compose pull (this is where I ran out of space the first time).
And finally ./start.   

Maybe tomorrow I will write up my initial impressions of Malcolm NSM.