Wednesday, April 2, 2025

Troubleshooting Docker Image Format: Ensuring Docker v2 Instead of OCI

Introduction

While working with Docker 27+, I encountered an issue where images were being saved in the OCI format instead of the expected Docker v2 schema format. This created compatibility challenges with existing workflows and required a deep dive into Docker's default behaviors, BuildKit, and potential workarounds. In this post, I will walk through the different solutions we explored and how we ultimately resolved the issue.


The Problem: Docker Save Producing OCI Format

When using the docker save command, we expected the output to be in Docker v2 format. However, in our environment, the images were getting stored in OCI format, which was causing issues with certain tools that depended on the legacy Docker format.

We confirmed this by inspecting the saved image:

tar -tf myimage.tar | head -n 10

If the output contained oci-layout, it indicated that the image was stored in OCI format.
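
The same check can be scripted. Here is a minimal Python sketch based on the marker files described above (archive_format is our own helper name, not a standard tool):

```python
import tarfile

def archive_format(path):
    """Classify a saved image archive by its marker files:
    'oci' if it contains oci-layout, 'docker-v2' if it contains
    manifest.json, 'unknown' otherwise."""
    with tarfile.open(path) as tar:
        names = set(tar.getnames())
    if "oci-layout" in names:
        return "oci"
    if "manifest.json" in names:
        return "docker-v2"
    return "unknown"
```

Checking oci-layout first matters: newer Docker versions may write both marker files into one archive for compatibility.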


Step 1: Understanding BuildKit’s Role

Starting with Docker 23.0, BuildKit is enabled by default. BuildKit improves performance, caching, and parallel execution, and when Docker is configured with the containerd image store, saved images default to the OCI layout unless explicitly configured otherwise.

To check if BuildKit is enabled, we ran:

docker info | grep "BuildKit"

If the output showed BuildKit: true, we knew that the default build system was active and potentially affecting image formats.


Step 2: Disabling BuildKit to Enforce Docker v2 Format

To ensure that Docker stored images in Docker v2 format, we disabled BuildKit at build time (the DOCKER_BUILDKIT variable affects docker build, not docker save):

DOCKER_BUILDKIT=0 docker build -t my-image:tag .
docker save -o myimage.tar my-image:tag

Building with the legacy builder stores the image in the Docker v2 schema, so the subsequent docker save produces a Docker-format archive rather than OCI.


Step 3: Verifying Image Format

After saving the image, we checked the contents again:

tar -tf myimage.tar | head -n 10

If oci-layout was missing and instead we saw manifest.json and layer.tar, it confirmed that the image was now in Docker v2 format.


Step 4: Using Buildx to Ensure Docker Format

If BuildKit had to remain enabled, another approach was using Buildx to explicitly enforce the Docker format:

docker buildx build --output type=docker -t my-image:tag .

Then, saving the image:

docker save -o myimage.tar my-image:tag

This ensured the image was stored using Docker v2 format, even when BuildKit was active.


Step 5: Converting OCI Images to Docker v2 Format

For images that were already saved in OCI format, we used Skopeo to convert them:

skopeo copy oci-archive:myimage.tar docker-archive:myimage-docker.tar:my-image:tag

The trailing my-image:tag reference gives the converted archive a tag, so docker load restores a named image. This allowed us to work with the Docker-compatible format without rebuilding the image.


Final Solution and Takeaways

Key Fixes We Found:

  • Disable BuildKit at build time: DOCKER_BUILDKIT=0 docker build -t my-image:tag . followed by docker save -o myimage.tar my-image:tag
  • Use Buildx to enforce Docker format: docker buildx build --output type=docker -t my-image:tag .
  • Verify the saved image format with tar -tf myimage.tar
  • Convert existing OCI images to Docker v2 using skopeo

This process helped us ensure that our Docker images remained in Docker v2 format, avoiding compatibility issues with existing workflows.


Conclusion

Understanding Docker’s default behavior in newer versions and how BuildKit affects image formats was crucial in solving this issue. If you're facing similar problems with Docker images defaulting to OCI format, these solutions should help enforce Docker’s legacy format where needed. 🚀

Feel free to share if you've faced similar challenges and what solutions worked for you!

Friday, March 21, 2025

How to Create a ConfigMap with Multiple Files in Kubernetes

In Kubernetes, a ConfigMap is used to store configuration data such as environment variables, configuration files, or command-line arguments. When working with multiple configuration files, you may need to create a ConfigMap where each file name acts as a key. This blog explains how to achieve that using different methods.

1. Creating a ConfigMap from Multiple Files

If you have multiple files and want to use their names as keys, you can use the following command:

kubectl create configmap my-config --from-file=/path/to/file1.txt --from-file=/path/to/file2.yaml --from-file=/path/to/file3.json -n my-namespace

Example:

If you have three files:

  • /config/file1.txt
  • /config/file2.yaml
  • /config/file3.json

Run:

kubectl create configmap my-config --from-file=/config/file1.txt --from-file=/config/file2.yaml --from-file=/config/file3.json

This will create a ConfigMap where:

  • file1.txt, file2.yaml, and file3.json will be keys.
  • The contents of these files will be stored as values.
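
The key/value mapping that kubectl builds here can be illustrated with a short Python sketch (configmap_data is a hypothetical helper for illustration, not part of kubectl):

```python
import os

def configmap_data(*file_paths):
    """Mimic `kubectl create configmap --from-file`: each file's
    basename becomes a key, and its contents become the value."""
    data = {}
    for path in file_paths:
        with open(path, "r") as f:
            data[os.path.basename(path)] = f.read()
    return data
```

This is exactly the structure that ends up under the data: field of the resulting ConfigMap.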

2. Creating a ConfigMap from a Directory

If all configuration files are stored in a directory, you can create a ConfigMap from the entire directory:

kubectl create configmap my-config --from-file=/config-directory -n my-namespace

This will include all files in /config-directory as keys in the ConfigMap.

3. Creating a ConfigMap Using YAML

You can manually create a ConfigMap using a YAML file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: my-namespace
data:
  file1.txt: |
    This is the content of file1.
  file2.yaml: |
    key: value
  file3.json: |
    { "name": "example", "type": "json" }

Apply it using:

kubectl apply -f my-config.yaml

4. Mounting the ConfigMap in a Pod

Once created, you can mount the ConfigMap as files inside a pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  containers:
    - name: my-container
      image: busybox
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config  # Files will be available here
  volumes:
    - name: config-volume
      configMap:
        name: my-config

5. Accessing ConfigMap Data

Once the ConfigMap is mounted inside the pod, you can access the files:

kubectl exec -it my-pod -- cat /etc/config/file1.txt

Conclusion

Using ConfigMaps in Kubernetes helps manage configuration files efficiently. Whether you create a ConfigMap from individual files, directories, or manually via YAML, Kubernetes makes it easy to inject configurations into your applications.

Let us know if you have any questions! 🚀

Reference

For more details, visit the official Kubernetes documentation: Kubernetes ConfigMap

Thursday, March 20, 2025

How to Optimize Kubernetes Performance in 2025

Kubernetes continues to be the backbone of cloud-native infrastructure in 2025. However, as workloads scale, optimizing Kubernetes performance becomes crucial for cost savings, efficiency, and reliability. In this guide, we’ll explore cutting-edge techniques to optimize Kubernetes performance and keep your clusters running smoothly.


1. Use Efficient Autoscaling Strategies

✅ Horizontal Pod Autoscaler (HPA)

  • Scale workloads dynamically based on CPU, memory, or custom metrics.
  • Use KEDA (Kubernetes Event-Driven Autoscaling) for event-based scaling.

✅ Vertical Pod Autoscaler (VPA)

  • Adjust resource requests and limits automatically to optimize pod performance.

✅ Cluster Autoscaler

  • Automatically adds or removes nodes based on workload demand.
  • Works well with AWS EKS, GCP GKE, and Azure AKS.

💡 Pro Tip: Combine HPA and VPA for optimal autoscaling!


2. Optimize Kubernetes Resource Requests & Limits

  • Set appropriate CPU & memory requests to prevent resource wastage.
  • Avoid over-provisioning to reduce cloud costs.
  • Use Goldilocks to analyze and recommend optimal resource settings.

🚀 Example: If your pod requests 2 CPU but uses only 0.5 CPU, adjust requests to 0.75 CPU to free up resources.
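
The arithmetic behind that example can be sketched as follows (the 50% headroom factor is an assumption for illustration, not a Kubernetes default):

```python
def rightsize_cpu(observed_usage, headroom=0.5):
    """Suggest a CPU request: observed usage plus a safety headroom.
    E.g. 0.5 CPU observed with 50% headroom suggests a 0.75 CPU request."""
    return round(observed_usage * (1 + headroom), 3)
```

Tools like Goldilocks apply the same idea automatically, using observed usage percentiles rather than a fixed factor.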


3. Use Node and Pod Affinity for Better Scheduling

  • Node Affinity: Ensure critical workloads run on high-performance nodes.
  • Pod Affinity & Anti-Affinity: Optimize pod placement to reduce latency.
  • Taints & Tolerations: Keep sensitive workloads isolated.

💡 Example: Use Anti-Affinity to prevent all replicas from running on the same node, improving fault tolerance.


4. Implement Efficient Networking Practices

  • Use CNI Plugins: Choose optimized networking solutions like Cilium or Calico.
  • Enable gRPC Load Balancing for high-performance microservices.
  • Optimize Ingress Controllers: Use NGINX Ingress or Traefik for better performance.
  • Use Multi-NIC for High Traffic Apps to split traffic across interfaces.

📌 Bonus: Monitor DNS latencies to prevent slow service discovery.


5. Enable Persistent Storage Optimization

  • Use ReadWriteMany (RWX) storage classes for shared storage access.
  • Optimize Persistent Volume Claims (PVCs) to avoid excessive provisioning.
  • Prefer NVMe SSDs over traditional storage for I/O-intensive workloads.
  • Enable Filesystem Caching to speed up read-heavy applications.

💡 Example: AWS EFS or Azure Files can be used for cost-efficient shared storage in Kubernetes.


6. Use Service Mesh for Performance Gains

  • Deploy a lightweight service mesh like Linkerd instead of heavy Istio.
  • Optimize gRPC communication for microservices.
  • Reduce sidecar overhead by enabling eBPF-based networking.

📌 2025 Trend: Many organizations are replacing sidecar proxies with eBPF-based CNI plugins to boost network performance.


7. Improve Logging and Monitoring Efficiency

  • Use Loki + Promtail instead of ELK for cost-effective log aggregation.
  • Enable Prometheus Remote Write to store long-term metrics efficiently.
  • Reduce Kubernetes audit logs retention to avoid unnecessary storage costs.
  • Use Grafana Cloud or OpenTelemetry for scalable observability.

🚀 Example: Reduce Prometheus scrape intervals from 15s to 30s to save CPU resources.


8. Optimize Container Image Size & Startup Time

  • Use distroless images instead of full OS-based images.
  • Enable Lazy Loading (CRI-O, Dragonfly) for faster container startup.
  • Minimize image size by removing unnecessary dependencies.

💡 Example: Instead of using ubuntu:latest, use gcr.io/distroless/base to reduce attack surface and improve performance.


9. Secure & Optimize API Server Performance

  • Use API Priority & Fairness (APF) to prevent high-priority workloads from being throttled.
  • Reduce excessive kubectl get queries to minimize API server load.
  • Cache API requests using kube-proxy or external caching layers.

📌 Trend: Many enterprises are using Kube-Proxy-less architectures to reduce network overhead.


10. Use Cost Optimization Tools

  • Use Kubecost to track Kubernetes spend and optimize resource allocation.
  • Right-size node instances using Karpenter (AWS) or Cluster Autoscaler.
  • Implement Spot & Preemptible Nodes for cost savings.

🚀 Example: Running workloads on Spot Instances can save 50-80% on cloud costs.


Conclusion: Keep Your Kubernetes Cluster Running at Peak Performance!

By implementing these cutting-edge optimizations, you can reduce costs, improve performance, and ensure a smooth-running Kubernetes environment in 2025. Whether it’s autoscaling, resource optimization, networking, storage, or cost efficiency, these best practices will help you stay ahead. Thanks for reading!

👉 Which strategy are you implementing first? Drop a comment below! 🚀

Sunday, March 2, 2025

What is Kubeflow? A Comprehensive Guide

Introduction

As machine learning (ML) workloads grow more complex, organizations need efficient ways to manage, deploy, and scale their ML models. Kubeflow is an open-source platform designed to streamline and automate machine learning workflows on Kubernetes. It provides a powerful, scalable, and portable ML toolkit that enables data scientists and engineers to focus on model development rather than infrastructure management.


What is Kubeflow?

Kubeflow is a machine learning (ML) platform that runs on Kubernetes. It is designed to make ML model training, deployment, and orchestration easier by leveraging Kubernetes’ scalability and resource management capabilities.

Key Features of Kubeflow:

  • Scalability: Utilizes Kubernetes to manage large-scale ML workloads.
  • Portability: Runs on various cloud providers and on-premises Kubernetes clusters.
  • Multi-Framework Support: Supports TensorFlow, PyTorch, XGBoost, and other ML frameworks.
  • Pipeline Orchestration: Allows for the creation, execution, and monitoring of ML workflows.
  • Model Serving: Deploys and manages trained ML models using TensorFlow Serving, KServe (formerly KFServing), and Seldon.
  • Hyperparameter Tuning: Enables automatic model optimization with Katib.

Why Use Kubeflow?

1. Simplified ML Lifecycle Management

Kubeflow abstracts away the complexities of Kubernetes, allowing ML engineers to focus on model training, tuning, and deployment without deep Kubernetes expertise.

2. Reproducibility and Collaboration

With Kubeflow Pipelines, users can create and share ML workflows, ensuring reproducibility and efficient team collaboration.

3. Scalable ML Training

Kubeflow optimizes resource allocation, enabling large-scale distributed training using Kubernetes-native capabilities like GPUs and TPUs.

4. End-to-End Automation

From data preparation to model training, evaluation, and serving, Kubeflow automates the entire ML workflow.

Key Components of Kubeflow

1. Kubeflow Pipelines

A tool for designing, deploying, and managing ML workflows as directed acyclic graphs (DAGs). It enables reproducibility and version control of ML experiments.

2. Katib (Hyperparameter Tuning)

Automates hyperparameter tuning to optimize ML model performance.

3. KServe (Model Serving, formerly KFServing)

Provides serverless ML model deployment, integrating with Knative for efficient inference.

4. Notebooks

Supports Jupyter notebooks, allowing data scientists to develop and experiment in an interactive environment.

How to Get Started with Kubeflow

  1. Install Kubeflow on your Kubernetes cluster:
     kfctl apply -V -f https://github.com/kubeflow/manifests/archive/master.tar.gz
  2. Deploy ML pipelines using the Kubeflow Pipelines UI or CLI.
  3. Train and serve models with TensorFlow, PyTorch, or Scikit-learn.

Conclusion

Kubeflow is a game-changer for organizations adopting MLOps. By integrating seamlessly with Kubernetes, it enables scalable, portable, and automated ML workflows, making it a preferred choice for modern AI-driven applications.


Have you tried Kubeflow? Share your thoughts in the comments Please!

Kubernetes AI/ML Integration 2025: Revolutionizing Machine Learning Workflows

Introduction

Artificial Intelligence (AI) and Machine Learning (ML) have become essential for businesses looking to gain insights, automate processes, and build intelligent applications. Kubernetes, the industry-standard container orchestration platform, provides a scalable and flexible infrastructure for deploying AI/ML workloads efficiently. This blog explores how Kubernetes enhances AI/ML workflows, key tools, and best practices for integration.


Why Use Kubernetes for AI/ML?

1. Scalability

Kubernetes enables seamless scaling of AI/ML workloads, ensuring efficient resource allocation based on demand.

2. Resource Management

With support for GPU scheduling and optimized workload distribution, Kubernetes ensures efficient use of computing resources for training and inference.

3. Reproducibility & Portability

Containerized ML models can be easily deployed and moved across environments, eliminating inconsistencies in development and production setups.

4. Automation & Orchestration

Kubernetes automates deployment, monitoring, and scaling of ML workflows, reducing manual intervention and operational overhead.


Key Tools for AI/ML on Kubernetes

1. Kubeflow

Kubeflow is an open-source AI/ML toolkit for Kubernetes, designed to streamline model training, deployment, and monitoring.

  • Supports TensorFlow, PyTorch, and other ML frameworks
  • Provides Jupyter notebooks for interactive experimentation
  • Automates hyperparameter tuning with Katib

2. MLflow

An open-source platform for managing ML lifecycles, including experiment tracking, model packaging, and deployment on Kubernetes.

3. KServe (formerly KFServing)

A Kubernetes-native serving solution for deploying scalable and efficient ML models.

  • Supports multi-framework model serving
  • Provides autoscaling with Knative
  • Enables A/B testing and model versioning

4. TensorFlow Serving & TorchServe

These tools provide optimized model serving for TensorFlow and PyTorch on Kubernetes.


How to Deploy an AI/ML Model on Kubernetes

Step 1: Containerize the Model

Package your trained ML model into a Docker container:

FROM tensorflow/serving
COPY ./model /models/my_model
ENV MODEL_NAME=my_model

Step 2: Define a Kubernetes Deployment

Create a Kubernetes Deployment YAML file to deploy the model container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
      - name: ml-model
        image: myregistry/my-ml-model:latest
        ports:
        - containerPort: 8501

Step 3: Expose the Model as a Service

apiVersion: v1
kind: Service
metadata:
  name: ml-model-service
spec:
  selector:
    app: ml-model
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8501
  type: LoadBalancer

Step 4: Deploy to Kubernetes

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
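
Once the LoadBalancer has an external IP, the model can be queried over TensorFlow Serving's REST API. A minimal Python sketch (the EXTERNAL_IP placeholder and model name my_model are assumptions matching the manifests above):

```python
import json
from urllib import request

def build_predict_payload(instances):
    """TensorFlow Serving REST predict body: {"instances": [...]}."""
    return json.dumps({"instances": instances}).encode()

def predict(instances, url="http://EXTERNAL_IP/v1/models/my_model:predict"):
    """POST the instances to the serving endpoint and return predictions."""
    req = request.Request(url, data=build_predict_payload(instances),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

Replace EXTERNAL_IP with the address shown by kubectl get service ml-model-service.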

Best Practices for AI/ML on Kubernetes

  • Use GPU Nodes: Leverage Kubernetes GPU support for accelerated model training.
  • Implement CI/CD Pipelines: Automate model deployment using tools like ArgoCD or Jenkins.
  • Monitor Model Performance: Integrate Prometheus and Grafana for real-time monitoring.
  • Optimize Resource Allocation: Use Kubernetes-native tools like Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).

Conclusion

Kubernetes simplifies AI/ML deployment by offering scalability, automation, and resource efficiency. By leveraging tools like Kubeflow, MLflow, and KServe, organizations can build robust AI pipelines and accelerate innovation. As AI continues to evolve, Kubernetes remains a critical enabler of next-generation machine learning applications.


What AI/ML workloads are you running on Kubernetes in 2025? Share your experience in the comments please!

Understanding the docker init Command

Introduction

Docker is a powerful containerization platform that simplifies application deployment and management. One of its lesser-known but useful commands is docker init. This blog explores the purpose, usage, and benefits of the docker init command.


What is docker init?

The docker init command is a feature introduced to help users quickly set up a new Docker project. It automatically generates a Dockerfile, .dockerignore, and other essential configuration files, streamlining the containerization process.

Key Features:

  • Automatically creates a Dockerfile with best practices.
  • Generates a .dockerignore file to optimize build performance.
  • Provides an interactive setup process for customizing configurations.

How to Use docker init

1. Basic Usage

To initialize a new Docker project, navigate to your project directory and run:

docker init

This will prompt you to configure various options for your containerized application.

2. Interactive Prompts

When running docker init, you'll be asked to provide details such as:

  • Application type (e.g., Node.js, Python, Java, etc.)
  • Base image selection
  • Port configurations
  • Runtime dependencies

3. Generated Files

After running docker init, the following files are created:

  • Dockerfile: Contains the build instructions for your container.
  • .dockerignore: Specifies files to be excluded from the image build.
  • Additional configuration files based on the selected application type.

Benefits of Using docker init

  • Saves Time: Automates the setup of a Docker environment.
  • Ensures Best Practices: Generates optimized Dockerfiles.
  • Reduces Errors: Helps prevent common pitfalls in containerization.

Conclusion

The docker init command is a valuable tool for both beginners and experienced developers looking to quickly set up a containerized application. By automating the creation of essential Docker files, it simplifies the process of getting started with Docker.


Have you tried docker init? Share your experience in the comments!

Kubernetes Interview Questions 2025 (With Answers)

Kubernetes continues to be the backbone of modern container orchestration, making it a key skill for DevOps engineers, cloud architects, and developers. If you're preparing for a Kubernetes interview in 2025, here are some essential questions along with their answers.


1. What is Kubernetes, and why is it used?

Answer:

Kubernetes is an open-source container orchestration platform used to automate the deployment, scaling, and management of containerized applications. It enables efficient resource utilization, self-healing, and declarative configuration, making it essential for modern cloud-native applications.

2. Explain the core components of Kubernetes architecture.

Answer:

Kubernetes architecture consists of:

  • Control Plane (Master) Node: Includes the API Server, Controller Manager, Scheduler, and etcd (key-value store).
  • Worker Nodes: Host containerized applications and include the Kubelet, Kube-Proxy, and a Container Runtime (Docker, CRI-O, containerd).
  • Pods: The smallest deployable unit, containing one or more containers.
  • Services: Abstracts network access to a set of pods.
  • Namespaces: Logical partitions for organizing resources.

3. What are Deployments in Kubernetes?

Answer:

A Deployment is a Kubernetes resource that manages a ReplicaSet and ensures the desired number of pod replicas are running. It supports rolling updates, rollbacks, and declarative updates to applications.

4. How does Kubernetes handle networking?

Answer:

Kubernetes networking follows a flat network model, where every pod gets a unique IP address. It includes:

  • ClusterIP: Internal service within the cluster.
  • NodePort: Exposes services on a static port on each node.
  • LoadBalancer: Integrates with cloud provider’s load balancer.
  • Network Policies: Control communication between pods.

5. What is a Persistent Volume (PV) and Persistent Volume Claim (PVC)?

Answer:

  • Persistent Volume (PV): A cluster-wide storage resource provisioned by admins.
  • Persistent Volume Claim (PVC): A request for storage made by users, which binds to an available PV.

6. What are StatefulSets in Kubernetes?

Answer:

StatefulSets manage stateful applications, ensuring ordered deployment, unique pod identities, and stable storage. Ideal for databases like MySQL, PostgreSQL, and Apache Kafka.

7. How do you scale applications in Kubernetes?

Answer:

Applications can be scaled using:

  • Horizontal Pod Autoscaler (HPA): Scales pods based on CPU/memory.
  • Vertical Pod Autoscaler (VPA): Adjusts resource requests for pods.
  • Cluster Autoscaler: Adjusts node count based on pending pod demands.
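
HPA's scaling decision follows a documented formula: desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue). A minimal sketch:

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric):
    """Horizontal Pod Autoscaler core formula:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

For example, 3 replicas averaging 90% CPU against a 60% target scale out to 5 replicas.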

8. How does Kubernetes handle secrets and configuration management?

Answer:

  • ConfigMaps: Store non-sensitive configuration data like environment variables.
  • Secrets: Store sensitive information such as passwords, tokens, and certificates in an encrypted format.

9. What is the difference between ReplicaSet and ReplicationController?

Answer:

  • ReplicationController: Ensures a specified number of pod replicas are running.
  • ReplicaSet: An improved version of ReplicationController that supports set-based label selectors for more flexible pod selection.

10. What is the difference between DaemonSet and Deployment?

Answer:

  • DaemonSet: Ensures a copy of a pod runs on all or some nodes (e.g., logging agents, monitoring).
  • Deployment: Manages stateless applications and ensures the required number of pod replicas are maintained.

11. How does Kubernetes handle rolling updates and rollbacks?

Answer:

  • Rolling Update: Kubernetes updates pods in a controlled manner, ensuring zero downtime.
  • Rollback: If an update fails, Kubernetes can revert to a previous working version using kubectl rollout undo deployment <deployment_name>.

12. What are Kubernetes Jobs and CronJobs?

Answer:

  • Job: Runs a task to completion, ensuring the specified number of successful completions.
  • CronJob: Schedules Jobs to run at specified times (like a Linux cron job).

13. How does Kubernetes manage multi-tenancy?

Answer:

Multi-tenancy in Kubernetes is achieved using:

  • Namespaces: Isolate resources for different teams or projects.
  • RBAC (Role-Based Access Control): Restricts access based on user roles.
  • Resource Quotas & Limit Ranges: Control resource usage per namespace.

14. What is Helm in Kubernetes?

Answer:

Helm is a package manager for Kubernetes that simplifies application deployment using Helm Charts. It allows version control, dependencies management, and easy updates.

15. What is a Service Mesh in Kubernetes?

Answer:

A Service Mesh manages service-to-service communication, providing traffic management, security, and observability. Examples include Istio, Linkerd, and Consul.

16. What is the difference between Kubernetes and OpenShift?

Answer:

  • Installation: Kubernetes is complex to install; OpenShift is easier with built-in tools.
  • Security: Kubernetes requires manual configuration; OpenShift ships built-in security policies.
  • UI & Developer Tools: Kubernetes is minimal; OpenShift provides a rich web console.
  • CI/CD Integration: Kubernetes needs external tools; OpenShift has native CI/CD support.

17. What monitoring tools are used for Kubernetes?

Answer:

Popular monitoring tools include:

  • Prometheus & Grafana: Metric collection and visualization.
  • Elasticsearch, Fluentd, Kibana (EFK): Log aggregation.
  • Jaeger & OpenTelemetry: Distributed tracing.

18. How do you troubleshoot Kubernetes cluster issues?

Answer:

  • Use kubectl describe pod <pod_name> for pod details.
  • Check logs with kubectl logs <pod_name>.
  • Debug using kubectl exec -it <pod_name> -- /bin/sh.
  • View events with kubectl get events.

19. What is Kubernetes Federation?

Answer:

Kubernetes Federation allows management of multiple clusters as a single entity, improving disaster recovery, load balancing, and multi-cloud deployments.

20. What are sidecar containers in Kubernetes?

Answer:

Sidecar containers run alongside primary containers in a pod, extending functionality like logging, monitoring, or proxying. Examples include Envoy proxy for service meshes.


Final Thoughts

Mastering Kubernetes is essential for cloud-native development and DevOps roles. These interview questions cover core concepts, best practices, and real-world scenarios to help you excel in your Kubernetes interview in 2025.

Happy Learning and Good Luck with Your Interview!
