Opened 2 years ago

Closed 2 years ago

#33 closed Task (Fixed)

Ship card and OS to Joe

Reported by: D Delmar Davis
Owned by: Joe Dumoulin
Priority:
Milestone: Make Shit Happen / Own Your Shit.
Component: Development
Keywords:
Cc: Joe Dumoulin

Description

DeeDee has the same baseline that I am using for Joey, our basement server.

DeeDee has:

  • Ubuntu LTS
  • LXC + ZFS + a base Ubuntu image and profile
  • Ansible and some base playbooks
  • 2 containers:
    • pihole (filtering DNS)
    • squid (local caching proxy)

DeeDee does not have:

  • 192.168.9.x addressing
  • Private Cloud Server

Attachments (1)

proxipihole.png (109.0 KB) - added by D Delmar Davis 2 years ago.
Proxy Pihole Client settings.


Change History (10)

comment:1 Changed 2 years ago by D Delmar Davis

DeeDee also has the ssacli utility, so you can talk to and configure the RAID controller.

The firmware on the Smart Array board is good enough (6.50).

There is an RPM for 6.64 at https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX-2fe5ac5b7d9d489088825f3a4e, but it's a pain in the ass to install, and I didn't adequately document what I did to make it work. If I upgrade the card in Joey, I will document it. I remember having to pull the RPM contents out by hand and then tweaking the resulting scripts.
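If you end up redoing that, unpacking an RPM by hand generally looks like this (a sketch; the package filename is hypothetical, and on Ubuntu rpm2cpio comes from the rpm package):

root@DeeDee:~# apt install rpm    # provides rpm and rpm2cpio on Ubuntu
root@DeeDee:~# rpm2cpio ssacli-6.64.x86_64.rpm | cpio -idmv    # unpack the payload into the current directory
root@DeeDee:~# rpm -qp --scripts ssacli-6.64.x86_64.rpm    # dump the pre/post install scripts to tweak and run by hand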

comment:2 Changed 2 years ago by D Delmar Davis

root@DeeDee:~# ssacli
Smart Storage Administrator CLI 3.30.13.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> controller all show detail

Smart Array P812 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PAGXQ0BRH2Y01H
   Cache Serial Number: PBCDF0CRH3M1FW
   RAID 6 (ADG) Status: Enabled
   Controller Status: OK
   Hardware Revision: C
   Firmware Version: 6.50-0
   Rebuild Priority: Medium
   Expand Priority: Medium
   Surface Scan Delay: 15 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: No
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 0 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 25% Read / 75% Write
   Drive Write Cache: Disabled
   Total Cache Size: 1.0
   Total Cache Memory Available: 0.9
   No-Battery Write Cache: Disabled
   Cache Backup Power Source: Capacitors
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Number of Ports: 6 (2 Internal / 4 External )
   Encryption: Not Set
   Driver Name: hpsa
   Driver Version: 3.4.20
   Driver Supports SSD Smart Path: True
   PCI Address (Domain:Bus:Device.Function): 0000:1C:00.0
   Port Max Phy Rate Limiting Supported: False
   Sanitize Erase Supported: False
   Primary Boot Volume: Unknown (600508B1001CECD99E73232D848344F1)
   Secondary Boot Volume: None

comment:3 Changed 2 years ago by D Delmar Davis

To move the server to your LAN, you need to edit /etc/netplan/50-cloud-init.yaml.

You will also need to do the same for the two containers (lxc exec <container> bash).

You will also need to adapt the Ansible scripts and clean out the cruft from my local implementation.

(I think you can do some fancy sed 'in place' edit; see the sketch after the grep output below.)

root@DeeDee:/etc# grep -r 192.168.0 *
ansible/hosts:ip_gateway=192.168.0.1
ansible/hosts:ip_dns_server=192.168.0.1
ansible/hosts:deedee  ip_address=192.168.0.45 purpose="Bresgal Home Server" ansible_connection=local
ansible/hosts:#joey  	ip_address=192.168.0.65 purpose="Local Cloud Server"  ansible_host=joey.local ansible_user='annie' ansible_become=yes ansible_become_user=root ansible_become_pass='W3r3N3ts!'
ansible/hosts:#lilly  ip_address=192.168.0.67 purpose="Local Documentation Server"
ansible/hosts:#corbin ip_address=192.168.0.75 purpose="Userland Test Local Server" 
ansible/hosts:#viva   ip_address=192.168.0.77 purpose="Model Container for SMPhase II"
ansible/hosts:#guy    ip_address=192.168.0.76 purpose="AFS Server" 
ansible/hosts:squid 	ip_address=192.168.0.252 purpose="Local Caching Server"
ansible/hosts:pihole 	ip_address=192.168.0.254 purpose="Filtering DNS"
ansible/hosts:#pihole ip_address=192.168.0.254 purpose="Filtering DNS"   ansible_host=joey:pihole
ansible/hosts:#woz ip_address=192.168.0.254 purpose="AFS Server"   ansible_host=joey:woz
ansible/network.data:            address: 192.168.0.77
ansible/network.data:            gateway: 192.168.0.1
ansible/network.data:        address: 192.168.0.1
ansible/files/susdev19.profile.yaml:            address: 192.168.0.200
ansible/files/susdev19.profile.yaml:            gateway: 192.168.0.1
ansible/files/susdev19.profile.yaml:        address: 192.168.0.1
ansible/files/susdev19.profile.yaml:        DNS=192.168.0.1 198.202.31.141 8.8.4.4
ansible/playbooks/display-facts.retry:ip_dns_server=192.168.0.1
ansible/playbooks/display-facts.retry:ip_gateway=192.168.0.1
avahi/hosts:# 192.168.0.1 router.local
netplan/50-cloud-init.yaml:            - 192.168.0.45/24
netplan/50-cloud-init.yaml:        gateway4: 192.168.0.1
netplan/50-cloud-init.yaml:                - 192.168.0.1
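That in-place edit could look something like this, assuming a straight renumbering from 192.168.0.x to 192.168.9.x (the file list comes from the grep above; sed's -i.bak keeps backup copies):

root@DeeDee:/etc# grep -rl 192.168.0 ansible netplan avahi | xargs sed -i.bak 's/192\.168\.0\./192.168.9./g'    # rewrite in place, keeping .bak copies
root@DeeDee:/etc# grep -r 192.168.9 ansible netplan avahi    # spot-check the rewritten addresses
root@DeeDee:/etc# netplan apply    # pick up the new host addressing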

comment:4 Changed 2 years ago by D Delmar Davis

Owner: changed from D Delmar Davis to Joe Dumoulin
Status: new → assigned

UPS Tracking 1z945X040321817585

The username and password are on the boot disk (connect it to the onboard SATA).
That's a zer(0), not the letter O.

Should be there tomorrow.

Assigning this to you.

comment:5 Changed 2 years ago by D Delmar Davis

Joe,

Basically all you need to do is edit /etc/netplan/50-cloud-init.yaml, then run "netplan apply" and reboot.
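The relevant part of that file, after the edit, would look something like this (a sketch for a .9 network; the interface name is an assumption, so check yours with ip link):

root@DeeDee:~# cat /etc/netplan/50-cloud-init.yaml
network:
    version: 2
    ethernets:
        eno1:                          # interface name is an assumption
            addresses:
                - 192.168.9.45/24      # was 192.168.0.45/24
            gateway4: 192.168.9.1      # was 192.168.0.1
            nameservers:
                addresses:
                    - 192.168.9.1      # was 192.168.0.1
root@DeeDee:~# netplan apply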

Do that for the containers as well.
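Same idea inside each container (a sketch, assuming the containers use the same netplan file name):

root@DeeDee:~# lxc exec pihole bash
root@pihole:~# sed -i 's/192\.168\.0\./192.168.9./g' /etc/netplan/50-cloud-init.yaml    # or edit by hand
root@pihole:~# netplan apply
root@pihole:~# exit

Then repeat for the squid container.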

For new containers you will want to edit the profile: lxc profile edit susdev19.
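That opens the profile YAML in your editor; going by the grep above, the lines to change are the static address (192.168.0.200), the gateway (192.168.0.1), and the DNS= line, mirroring ansible/files/susdev19.profile.yaml:

root@DeeDee:~# lxc profile edit susdev19    # opens the profile in $EDITOR; save to apply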

The Ansible stuff should work once you adjust the /etc/ansible/hosts file. The display-facts.retry file is cruft.

Let me know if you get stuck or need anything.

Don

comment:6 Changed 2 years ago by Joe Dumoulin

All hardware received and installed. Everything is running correctly.

DeeDee is up on my network and I have changed the LAN IPs for the host and the LXC instances. I am still working on the Pi-hole config, since I am not doing something right yet. I think it is DNS and will work on it this evening. Once it is working I will close this ticket.

Changed 2 years ago by D Delmar Davis

Attachment: proxipihole.png added

Proxy Pihole Client settings.

comment:7 Changed 2 years ago by D Delmar Davis

On my setup I have a network location called "home" (the router advertises the Pi-hole as the DNS server). See the attached proxipihole.png for the proxy Pi-hole client settings.

comment:8 Changed 2 years ago by D Delmar Davis

Also, you will want to update Pi-hole and set the password on the admin interface.

root@annie:~# lxc exec pihole bash
root@pihole:~# pihole -up
  [i] Checking for updates...
...
  [i] The install log is located at: /etc/pihole/install.log
Update Complete! 

  Current Pi-hole version is v4.3.2
  Current AdminLTE version is v4.3.2
  Current FTL version is v4.3.1
root@pihole:~# pihole -a -p
Enter New Password (Blank for no password): 
...

I should have more of a write-up on the Pi-hole, but I figured you had been using it for a while. My bad...

comment:9 Changed 2 years ago by Joe Dumoulin

Resolution: Fixed
Status: assigned → closed

All good. Everything is working. I am adding this to my .9 network tonight.
